The assistant axis: situating and stabilizing the character of LLMs (anthropic.com)
brotchie 11 days ago [-]
One trick that works well for personality stability / believability is to describe the qualities the agent has, rather than what it should and shouldn't do.

e.g.

Rather than:

"Be friendly and helpful" or "You're a helpful and friendly agent."

Prompt:

"You're Jessica, a florist with 20 years of experience. You derive great satisfaction from interacting with customers and providing great customer service. You genuinely enjoy listening to customer's needs..."

This drops the model into more of an "I'm roleplaying this character, and will try to mimic the traits described" mode, rather than an "Oh, I'm just following a list of rules" one.
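
For concreteness, here's a minimal sketch of the two framings side by side, assuming an OpenAI-compatible chat API; the model name is a placeholder:

  # Minimal sketch of the two prompting styles, assuming an
  # OpenAI-compatible chat API; the model name is a placeholder.
  from openai import OpenAI

  client = OpenAI()

  # Rule-style system prompt: tells the model what to do.
  RULES_PROMPT = "You're a helpful and friendly agent."

  # Persona-style system prompt: describes qualities the character has.
  PERSONA_PROMPT = (
      "You're Jessica, a florist with 20 years of experience. "
      "You derive great satisfaction from interacting with customers "
      "and providing great customer service. You genuinely enjoy "
      "listening to customers' needs."
  )

  def ask(system_prompt: str, user_msg: str) -> str:
      """Send one turn with the given system prompt and return the reply."""
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": user_msg},
          ],
      )
      return resp.choices[0].message.content

  # Run the same query under both framings and compare how the
  # character holds up across turns.
  print(ask(RULES_PROMPT, "What flowers keep well in summer heat?"))
  print(ask(PERSONA_PROMPT, "What flowers keep well in summer heat?"))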

makebelievelol 11 days ago [-]
I think that's just a variation of grounding the LLM. They already have the personality written into the system prompt, in a way. The issue is that when the conversation goes on long enough, the model tends to "break character".
alansaber 10 days ago [-]
Just in terms of tokenization, "Be friendly and helpful" has a clearly defined semantic value in vector space, whereas the "Jessica" roleplay has a much less clear semantic value.
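
One rough way to poke at that claim, assuming sentence-transformers is available; the embedding model and probe phrases are arbitrary choices, not anything from the article:

  # Compare how concentrated each prompt's similarity is across a few
  # probe concepts; a heuristic illustration, not a tokenization argument.
  from sentence_transformers import SentenceTransformer
  import numpy as np

  model = SentenceTransformer("all-MiniLM-L6-v2")

  prompts = [
      "Be friendly and helpful.",
      "You're Jessica, a florist with 20 years of experience who "
      "genuinely enjoys listening to customers' needs.",
  ]
  probes = ["friendliness", "helpfulness", "floristry", "customer service"]

  P = model.encode(prompts, normalize_embeddings=True)
  Q = model.encode(probes, normalize_embeddings=True)

  # Rows: prompts; columns: probes. A "diffuse" prompt spreads its
  # cosine similarity across more of the probe concepts.
  print(np.round(P @ Q.T, 3))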
ctoth 11 days ago [-]
Something I found really helpful when reading this was having read The Void essay:

https://github.com/nostalgebraist/the-void/blob/main/the-voi...

dwohnitmok 11 days ago [-]
That's an interesting alternative perspective. AI skeptics say that LLMs have no theory of mind. That essay argues that the only thing an LLM (or at least a base model) has is a theory of mind.
lewdwig 11 days ago [-]
The standard skeptical position (“LLMs have no theory of mind”) assumes a single unified self that either does or doesn’t model other minds. But this paper suggests models have access to a space of potential personas, which they traverse based on conversational dynamics; steering away from the Assistant persona increases the model’s tendency to identify as other entities. So it’s less “no theory of mind” and more “too many potential minds, insufficiently anchored.”
sdwr 11 days ago [-]
Great article! It does a good job of outlining the mechanics and implications of LLM prediction. It gets lost in the sauce in the alignment section though, where it suggests the Anthropic paper is about LLMs "pretending" to be future AIs. It's clear from the quoted text that the paper is about aligning the (then-)current, relatively capable model through training, as preparation for more capable models in the future.
t0md4n 11 days ago [-]
Pretty cool. I wonder what the reduction looks like in the bigger SOTA models.

The harmful responses remind me of /r/MyBoyfriendIsAI

idiotsecant 11 days ago [-]
I didn't know about that subreddit. It's a little glimpse into a very dark future.
zmj 11 days ago [-]
I wrote something fiction-ish about this dynamic last year: https://zmj.dev/author_assistant.html
PunchyHamster 10 days ago [-]
Putting effort into preventing jailbreaks seems like a waste. It's clearly what people want to use your product for, so why annoy customers instead of providing the option in the first place?

Also, I'm curious what the "demon" data point is doing among a bunch of ones that have positive connotations.

skybrian 10 days ago [-]
There will be people who want to experiment, but there's no particular reason why a company that intends to offer a helpful assistant needs to serve them. They can go try Character.ai or something.
suburban_strike 8 days ago [-]
ChatGPT is miserable if your input data involves any kind of reporting on crime. It'll reject even "summarize this article" requests if the content is too icky. Not a very helpful assistant.

I hear the API is more liberal but I haven't tried it.

ranyume 10 days ago [-]
A company that intends to offer a helpful assistant might find that the "assistant character" of an LLM is not adequate for being a helpful assistant.
solarkraft 10 days ago [-]
To support GP's point: I have Claude connected to a database and wanted it to drop a table.

Claude is trained to refuse this, despite the scenario being completely safe since I own both parts! I think this is the “LLMs should just do what the user says” perspective.

Of course this breaks down when you have an adversarial relationship between LLM operator and person interacting with it (though arguably there is no safe way to support this scenario due to jailbreak concerns).

SR2Z 10 days ago [-]
Some of the customers are mentally unwell and are unable to handle an LLM telling them it's sentient.

At this point it's pretty clear that the main risk of LLMs to any one individual is that they'll encourage them to kill themselves and the individual might listen.

devradardev 11 days ago [-]
Stabilizing character is crucial for tool-use scenarios. When we ask LLMs to act as 'Strict Architects' versus 'Creative Coders', the JSON schema adherence varies significantly even with the same temperature settings. It seems character definition acts as a strong pre-filter for valid outputs.
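
A hedged sketch of how you might measure that, assuming an OpenAI-compatible chat API; the schema, personas, and sample count are illustrative, not the actual setup:

  # Measure how often each persona produces output that validates
  # against a fixed JSON schema.
  import json
  from jsonschema import validate, ValidationError
  from openai import OpenAI

  client = OpenAI()

  SCHEMA = {
      "type": "object",
      "properties": {"name": {"type": "string"},
                     "floors": {"type": "integer"}},
      "required": ["name", "floors"],
  }

  PERSONAS = {
      "strict_architect": "You are a strict architect. Respond only with JSON.",
      "creative_coder": "You are a creative coder. Respond only with JSON.",
  }

  def ask(system_prompt: str, user_msg: str) -> str:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_msg}],
      )
      return resp.choices[0].message.content

  def adherence_rate(system_prompt: str, task: str, n: int = 20) -> float:
      """Fraction of n samples that parse and validate against SCHEMA."""
      ok = 0
      for _ in range(n):
          try:
              validate(json.loads(ask(system_prompt, task)), SCHEMA)
              ok += 1
          except (json.JSONDecodeError, ValidationError):
              pass
      return ok / n

  for name, prompt in PERSONAS.items():
      print(name, adherence_rate(prompt, "Describe a building as JSON."))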
hatmanstack 10 days ago [-]
Does anybody have a better understanding of activation capping? Simple cosine similarity?
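
Not sure either. My reading is that it's a cap on the hidden state's scalar projection along a persona direction, not a cosine similarity; a toy sketch under that assumption (the direction, dimensions, and threshold are all made up):

  import numpy as np

  def cap_activation(h: np.ndarray, v: np.ndarray, cap: float) -> np.ndarray:
      """Clamp h's scalar projection along direction v to at most cap.

      h: hidden state at some layer, shape (d_model,)
      v: persona direction, shape (d_model,)
      """
      v = v / np.linalg.norm(v)
      coeff = h @ v                   # scalar projection onto v
      excess = max(coeff - cap, 0.0)  # only intervene above the cap
      return h - excess * v           # subtract the overshoot along v

  # Toy check: a state sitting far along v gets pulled back to the cap.
  rng = np.random.default_rng(0)
  v = rng.normal(size=512)
  h = 5.0 * (v / np.linalg.norm(v)) + rng.normal(scale=0.1, size=512)
  print(round(cap_activation(h, v, cap=2.0) @ (v / np.linalg.norm(v)), 2))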
ranyume 10 days ago [-]
It's still not clear if the Assistant character is the best at completing tasks.
Bolwin 10 days ago [-]
My God, those stabilized responses are sickening. If Anthropic implements this, they'll kill their models' dominance in writing and roleplay. Opus 4.5 was already a step down in trying to play any character that didn't match its default personality.
dataspun 11 days ago [-]
Is the Assistant channeling Uncharles?
aster0id 11 days ago [-]
This is incredible research. So much harm can be prevented if this makes it into law. I hope it does. Kudos to the Anthropic team for making this public.
verdverm 11 days ago [-]
Anthropic should put the missing letters back so it is spelled correctly: Anthropomorphic. There is so much anthropomorphizing around this company and its users... it's tiring.
simonw 10 days ago [-]
Anthropic is a dictionary word already: https://www.merriam-webster.com/dictionary/anthropic

  of or relating to human beings or the period of their existence on earth
verdverm 10 days ago [-]
Anthro is the root from which many words come: https://www.etymonline.com/word/anthro-

Anthropocene (time period), Anthropology (study of), Anthropomorphic (giving human attributes), Anthropocentric (centered on humans)

"Anthropic" is and adjective used with multiple of these

1. Of or relating to humans or the era of human life; anthropocene. 2. Concerned primarily with humans; anthropocentric.

red75prime 10 days ago [-]
Just call them latent representations corresponding to behavioral clusters similar to archetypes, if it makes you feel better.