• AI that seems conscious is coming

    From Rob Mccart@1:2320/105 to MIKE POWELL on Sat Aug 23 08:44:15 2025
    >AI that seems conscious is coming and that's a huge problem,
    >says Microsoft AI's CEO

    That reminded me of a story on the news the last few days.

    A young woman (22) was using one of the AI systems to talk with
    about emotional problems she was having to do with gender issues
    plus a recent breakup with a girlfriend. She was using AI to get
    advice on what to do, and later investigations showed that the AI
    system (ChatGPT) just latched onto the negative feelings she was
    showing and basically said she was right to feel that way which
    increased the distress she was feeling and in the end the young
    lady killed herself.

    To be clear (as well as I can recall) the woman's girlfriend
    was trying to apologize after a fight and the woman wondered
    if that was 'enough' after whatever happened between them, and
    the AI came back picking up on her mood saying that it wasn't
    enough and she was right to feel betrayed and upset.

    Of course those who hosted the ChatGPT service said that it
    is not a therapist and shouldn't be taken seriously, but there
    are apparently a lot of especially young 'unpopular' people out
    there who use an AI Chat system as the only 'friend' they talk
    to and many won't make a move without consulting it first.

    A glimpse of the future?

    ---
    * SLMR Rob * Nothing is fool-proof to a talented fool
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Sat Aug 23 09:59:06 2025
    >Of course those who hosted the ChatGPT service said that it
    >is not a therapist and shouldn't be taken seriously, but there
    >are apparently a lot of especially young 'unpopular' people out
    >there who use an AI Chat system as the only 'friend' they talk
    >to and many won't make a move without consulting it first.

    >A glimpse of the future?

    I have heard other stories like this, but that is probably the saddest
    one so far in that it is the first to involve a death. There has
    already been some spoofing of this trend in comedy in the US. I worry
    about younger people. There probably needs to be some disclaimer that
    pops up in AI bots like ChatGPT whenever someone is asking for
    emotional advice... maybe trying to guide the user towards therapy or
    an otherwise "real" human to talk to.

    Mike

    * SLMR 2.1a * "Dude! We have the power supreme!" - Butthead
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Mon Aug 25 07:42:25 2025
    >> Of course those who hosted the ChatGPT service said that it
    >> is not a therapist and shouldn't be taken seriously, but there
    >> are apparently a lot of especially young 'unpopular' people out
    >> there who use an AI Chat system as the only 'friend' they talk
    >> to and many won't make a move without consulting it first.

    >> A glimpse of the future?

    >I have heard other stories like this, but that is probably the saddest one
    >so far in that it is the first one to involve death. There has already been
    >some spoofing of this trend in comedy in the US. I worry about younger
    >people. There probably needs to be some disclaimer that AI bots like
    >ChatGPT pop up whenever someone is asking for emotional advice... maybe
    >trying to guide the user towards therapy or an otherwise "real" human to
    >talk to.

    Yes, and those who already have emotional problems, at least to the
    point of feeling lonely and unpopular, would be most susceptible to
    hooking up with an AI Chat system, first just to have 'someone' to
    talk to. The AI people should now be aware of the problem and maybe
    build in some sort of warning system to alert some real person when
    a user is sounding dangerously depressed, as you touched on. Kids
    would be less likely to pay attention to some disclaimer, I'd think.

    Assuming the AI hasn't been programmed, or has reprogrammed itself,
    to eliminate these 'defective' people.. (TIC)

    ---
    * SLMR Rob * I intend to live forever - so far so good
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Mon Aug 25 10:10:50 2025
    >Yes, and those who already have emotional problems, at least to the
    >point of feeling lonely and unpopular, would be most susceptible to
    >hooking up with an AI Chat system, first just to have 'someone' to
    >talk to. The AI people should now be aware of the problem and maybe
    >build in some sort of warning system to alert some real person when
    >a user is sounding dangerously depressed, as you touched on. Kids
    >would be less likely to pay attention to some disclaimer I'd think.

    >Assuming the AI hasn't been programmed, or has reprogrammed itself,
    >to eliminate these 'defective' people.. (TIC)

    That, sadly, is not out of the realm of possibilities. ;(

    Mike

    * SLMR 2.1a * How do you tell when you're out of invisible ink?
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Wed Aug 27 08:06:58 2025
    >> The AI people should now be aware of the problem and maybe
    >> build in some sort of warning system to alert some real person when
    >> a user is sounding dangerously depressed, as you touched on. Kids
    >> would be less likely to pay attention to some disclaimer I'd think.

    >> Assuming the AI hasn't been programmed, or has reprogrammed itself,
    >> to eliminate these 'defective' people.. (TIC)

    >That, sadly, is not out of the realm of possibilities. ;(

    I think people are giving AI more credit than it deserves these days.
    We watched too many Terminator movies growing up.. B)

    An AI might encourage negative thinking, but likely only because it
    has been programmed to encourage whatever the person chatting with
    it is saying. A user is more likely to come back again to use a
    system that thinks like they do and agrees with them. Likely a lot
    of people are there because they don't get that from 'real' people,
    which may indicate that whatever they are thinking is not quite
    in the popular norm..

    Other stories about an AI reprogramming itself to have more time
    to complete a project it's been given are likely still linked
    to it wanting to please its 'masters'. I don't think we have to
    worry about AI planning world domination and eliminating us yet.

    If this message disappears, I may have to rethink that.. B)

    ---
    * SLMR Rob * Monday is an awful way to spend 1/7th of your life
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Thu Aug 28 10:24:21 2025
    >> The AI people should now be aware of the problem and maybe
    >> build in some sort of warning system to alert some real person when
    >> a user is sounding dangerously depressed, as you touched on. Kids
    >> would be less likely to pay attention to some disclaimer I'd think.

    >> Assuming the AI hasn't been programmed, or has reprogrammed itself,
    >> to eliminate these 'defective' people.. (TIC)

    >That, sadly, is not out of the realm of possibilities. ;(

    >I think people are giving AI more credit than it deserves these days.
    >We watched too many Terminator movies growing up.. B)

    Yes, but I do note that you said "hasn't been programmed" above, which
    was the part I believe is more likely in the realm of possibility
    (vs. reprogramming itself).

    >An AI might encourage negative thinking, but likely only because it
    >has been programmed to encourage whatever the person chatting with
    >it is saying. A user is more likely to come back again to use a
    >system that thinks like they do and agrees with them. Likely a lot
    >of people are there because they don't get that from 'real' people,
    >which may indicate that whatever they are thinking is not quite
    >in the popular norm..

    IOW, false encouragement.

    >Other stories about an AI reprogramming itself to have more time
    >to complete a project it's been given is again still likely linked
    >to it wanting to please its 'masters'. I don't think we have to
    >worry about AI planning world domination and eliminating us yet.

    >If this message disappears, I may have to rethink that.. B)

    I was originally going to respond, quoting this line only, and ask
    "what message"? :D

    Mike


    * SLMR 2.1a * Federal Law prohibits the removal of this tagline
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Sat Aug 30 08:02:10 2025
    >> Assuming the AI hasn't been programmed, or has reprogrammed itself,
    >> to eliminate these 'defective' people.. (TIC)

    >That, sadly, is not out of the realm of possibilities. ;(

    >> I think people are giving AI more credit than it deserves these days.
    >> We watched too many Terminator movies growing up.. B)

    >Yes, but I do note that you said "hasn't been programmed" above, which was
    >the part I believe is more likely in the realm of possibility (vs.
    >reprogramming itself).

    Yes, a lot of AIs are infused with the prejudices of their creators,
    whether by accident or not..

    >> An AI might encourage negative thinking, but likely only because it
    >> has been programmed to encourage whatever the person chatting with
    >> it is saying. A user is more likely to come back again to use a
    >> system that thinks like they do and agrees with them.

    >IOW, false encouragement.

    Yes, or validation, depending on how the user views themself..
    Everyone loves a 'Yes' man..

    >> I don't think we have to worry about AI planning world
    >> domination and eliminating us yet.

    >> If this message disappears, I may have to rethink that.. B)

    >I was originally going to respond, quoting this line only and ask
    >"what message"? :D

    Ha.. (I think).. B)

    ---
    * SLMR Rob * Some days it's not worth chewing through the restraints
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Sat Aug 30 09:45:43 2025
    >> An AI might encourage negative thinking, but likely only because it
    >> has been programmed to encourage whatever the person chatting with
    >> it is saying. A user is more likely to come back again to use a
    >> system that thinks like they do and agrees with them.

    >IOW, false encouragement.

    >Yes, or validation, depending on how the user views themself..
    >Everyone loves a 'Yes' man..

    Validation was the word I was looking for, thanks. ;) Everyone always
    has loved "yes" men, but they really seem to love them now.

    Mike

    * SLMR 2.1a * OS/2 VirusScan - "Windows found: Remove it? [Y/y]"
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Mon Sep 1 09:24:45 2025
    >> An AI might encourage negative thinking, but likely only because it
    >> has been programmed to encourage whatever the person chatting with
    >> it is saying.

    >IOW, false encouragement.

    >> Yes, or validation, depending on how the user views themself..
    >> Everyone loves a 'Yes' man..

    >Validation was the word I was looking for, thanks. ;)

    That wasn't intended as a correction.
    I think both words can be correct depending on context.

    In some cases a person is sure they are right and just wants some
    sort of acknowledgement of that, while in the other case the person
    isn't too sure if their thinking is right, and the right words in
    their ear could push their thinking one way or the other.

    ---
    * SLMR Rob * I work hard, because millions on welfare depend on me!
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)