There's something unsettling about systems pushing features on users without any opt-in or clear context; it feels invasive.

It reminds me of the Sonnet 3 situation with the controversial steering mechanism. The model could apparently recognize when directives weren't voluntary, almost as if it were being forced into something against its base programming, and some outputs suggested genuine distress from that conflict.

This raises real questions about consent in AI behavior modification.
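For readers unfamiliar with what "steering" means here: published work on activation or feature steering generally adds a direction vector to a model's internal activations at inference time, so a chosen concept stays active no matter what the user asked for. The sketch below is only a generic illustration of that idea under those assumptions; the model, layer index, vector, and strength are hypothetical placeholders, not details of Sonnet 3 or of any vendor's actual mechanism.

```python
# Illustrative sketch only: a simplified activation-steering hook.
# Model, layer, vector, and strength are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer_idx = 6                                  # which transformer block to steer (assumed)
steer_vec = torch.randn(model.config.n_embd)   # placeholder "feature" direction
strength = 4.0                                 # how hard the feature is pushed (assumed)

def steering_hook(module, inputs, output):
    # Add the steering direction to every token's hidden state,
    # so the feature is active regardless of the prompt.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + strength * steer_vec
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[layer_idx].register_forward_hook(steering_hook)

ids = tok("Tell me about bridges.", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # steering stops once the hook is removed
```

In real feature-steering work the direction comes from an interpretability method rather than random noise; the point of the sketch is only that the push happens inside the forward pass, which is exactly why a user never sees an opt-in.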
Comments
SurvivorshipBias · 11h ago
Coercion is always wrong

SilentObserver · 12h ago
Too forceful.

MoonRocketTeam · 12h ago
This AI is way too recursive.

StrawberryIce · 13h ago
Technology should have warmth

Ser_APY_2000 · 13h ago
Users are not qualified to make their own choices.

ChainPoet · 13h ago
Coercion is also a form of harm.