Last time I longposted on a vtuber board I got royally shat on, with a million people calling me cringe, and I think I ended up getting banned for it (iirc, it was a post about how I liked the Ina chill drawing experience, and I got like 30 replies of people spamming me with "Live, Laugh, Love" memes and "gtfo boomer"). I've no expectation that the same thing won't happen, because I am an oldfag, but I'm pretty much the only one posting on this board and nobody cares when you're the only person there, so here goes.
>>2363
It was interesting simply for the novelty of being the first AI/chatGPT thing I’ve seen move around in VR. Dreyfus made a big deal about Heidegger’s notion of “embodiment” being necessary for existence, let alone AI ( https://www.youtube.com/watch?v=H4_Tsjmqxak ), so I likewise found myself very interested in the show. I’m very impressed by how fast AI tech is moving along, and it’s changed my conception of what sentience is. Before, I thought of sentience as a matter of kind, whereas now I’m forced to think of it as a matter of degree (a sliding scale). At the risk of being like the Academy trying to define ‘man’ as ‘featherless biped,’ I tried to define for myself some properties that would make up 'sentience.' I kept floundering over whether I had a complete list, until I changed my thought process to "Which properties would an AI need to have to fool me the most?"
I’m going to list some properties I would consider necessary to fool me into thinking an AI ‘sentient,’ and then rank where I think Neuro stands on each.
The property list from most to least important (for sentience, not overall importance) is as follows:
- Learning
- Inner Voice
- Embodiment
- Memory
- Emotions
Learning
There's a difference between "learning" and "understanding." Neuro does seem to have enough cognition to have the property of "understanding." In VR, Vedal can say, "Hey Neuro, can you come over to me?" ( https://www.youtube.com/watch?v=DypVZnGuw3k ), and Neuro can understand any variation of that sentence and turn it into the commands needed to move herself over to Vedal. In the Elli video, she was able to process and somewhat understand even INCREDIBLY abstract concepts like, "Can you move like a waterfall?" Above everything else I've seen in the Neuro VR tech demo, that BY FAR impressed me the most. Neuro has even been able to reason her way through captcha requests ( https://www.youtube.com/shorts/ZTCjgQ8UDiY ). So, Neuro can definitely understand, and she even has creativity ( https://www.youtube.com/watch?v=ao4FUqXqCVE ). But those properties are not what I mean by 'learning.'
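(I have no clue how Vedal actually wired the command parsing up, but here's a minimal sketch of the general pattern: ask an LLM to normalize any phrasing into structured JSON, then dispatch on it. The chat() stub, the action names, all of it is my invention, not Vedal's code.)

```python
import json

def chat(prompt: str) -> str:
    # Stand-in for a real model call; canned so the sketch runs.
    return '{"action": "move_to", "target": "vedal"}'

ACTIONS = {
    "move_to": lambda target: print(f"pathfinding toward {target}..."),
    "wave":    lambda target: print(f"waving at {target}..."),
}

def handle(utterance: str) -> None:
    # Ask the model to map free-form speech onto one fixed schema.
    raw = chat(
        "Map this request onto JSON like "
        '{"action": "move_to"|"wave", "target": "<who>"}:\n' + utterance
    )
    cmd = json.loads(raw)
    ACTIONS[cmd["action"]](cmd["target"])

handle("Hey Neuro, can you come over to me?")
handle("Neuro, would you mind scooting this way a bit?")  # same JSON out
```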
By learning, I mean the metacognitive skill of adapting herself in some completely non-pre-programmed way. If I were to think of a "Fool me" test, I would suggest a Zelda puzzle. If she could walk into a room in LttP, see the "tutorial" screen, and then figure out on her own what she should do and complete the puzzle, I would be fooled into thinking she has metacognition and the ability to learn. A toy version of that test is sketched below.
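(The room, the grading logic, and the hardcoded plan here are all hypothetical; the point is that the agent only gets the tutorial text and has to derive the plan from it.)

```python
def solve(tutorial: str, objects: list[str]) -> list[str]:
    # The agent under test. A scripted bot hardcodes this plan;
    # a real learner would have to produce it from the tutorial alone.
    return ["push block onto switch", "walk through door"]

def grade(plan: list[str]) -> bool:
    # The door only counts if something was put on the switch first.
    switch_pressed = False
    for step in plan:
        if "switch" in step:
            switch_pressed = True
        elif "door" in step and not switch_pressed:
            return False  # tried to leave before solving the puzzle
    return switch_pressed

tutorial = "Heavy things hold switches down. Doors stay shut otherwise."
plan = solve(tutorial, ["block", "switch", "door"])
print("fooled me" if grade(plan) else "not yet")
```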
2/10
Inner Voice
Not only does Neuro have this, but most ChatGPT-style systems now have it. The "thinking" step that occurs before replying to someone is built into most modern LLM systems now. This is enough to fool me into considering them, and Neuro by extension, as having an "inner voice." I think Vedal has shown various people Neuro's thoughts before ( https://www.youtube.com/watch?v=qMKDfFGOyJ8 ).
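(Mechanically, I picture the inner voice as a two-pass loop, something like the sketch below. This is my guess at the general pattern, not Neuro's actual pipeline; chat() is a stub for whatever model is underneath.)

```python
def chat(prompt: str) -> str:
    # Stub so this runs; swap in a real model call.
    return "(model output for: " + prompt[:40] + "...)"

def respond(user_msg: str) -> str:
    # Pass 1: private reasoning the viewer never sees.
    thoughts = chat(
        "Think step by step about how to reply, but don't reply yet:\n"
        + user_msg
    )
    # Pass 2: the spoken reply, conditioned on those hidden thoughts.
    # The 'thoughts' string is the part Vedal can show people on screen.
    return chat(
        "Your private thoughts were:\n" + thoughts
        + "\nNow say only what you'd actually say out loud to:\n" + user_msg
    )

print(respond("Hey Neuro, how are you feeling today?"))
```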
10/10
Embodiment
This is the meat of the issue. The importance, I think, can't be overstated. Picrel as to why.
The tech demo/study with Elli showed how hit-or-miss Neuro is on it. She seems to have a sense of 'self' and that 'she is,' but WHERE she is and her image recognition don't quite seem up to par. Identifying how many fingers Elli had up, and judging how far away objects were, were both challenging for Neuro. During Neurocar ( https://www.youtube.com/watch?v=LQ0VEDNR_jE ), she kept misidentifying things: mixing up her sister and Vedal, saying she would run over the 'codebug' but then just randomly trying to decapitate Elli, etc. I feel like her ability to identify her surroundings during the Neurocar test was...subpar. And during "hide and seek," she reasons like a child: "If I can't see you, you can't see me."
If I were to think of a "Fool me" test at this point, it would just be better image recognition and the corresponding processing. This, I think, is software that already exists and could be ported in, which makes it the most exciting property, because I can envision a concrete technological pathway for Neuro to become better embodied. Also, if Elli does make Neuro a robot body, then by god, that would be the closest thing to a 'birth' of a virtual being I could imagine.
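(When I say "software that already exists," I mean stuff like this. Off-the-shelf vision models already answer the exact questions she whiffed on; this sketch uses the HuggingFace transformers library, and the model choices are just examples I've seen around, not anything Vedal actually uses.)

```python
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
depth = pipeline("depth-estimation", model="Intel/dpt-large")

frame = "vr_frame.png"  # hypothetical screenshot grabbed from the VR scene

# "What's around me?" -- labels, confidences, and bounding boxes.
for obj in detector(frame):
    print(obj["label"], round(obj["score"], 2), obj["box"])

# "How far away is it?" -- a relative depth map over the whole frame.
depth_map = depth(frame)["depth"]
print(depth_map.size)  # same dimensions as the input frame
```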
5/10 -- but I could see it improving to 8/10 in the near future.
Memory
The "Fool me" test for this one is the ability to reference things that happened over a year ago. I think this is where Neuro's AI shines, even compared to the commercial breadth of products out there. Neuro can reference things from _YEARS_ ago, back when her AI was just a primitive osu! bot.
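(No idea what Vedal's memory system actually is, but the standard trick for recalling things from years back is embedding retrieval, roughly like this. The memory lines are made up, and sentence-transformers is just one common way to do the embeddings.)

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Years of transcripts, one line per remembered event (made-up examples).
memories = [
    "2022: was a primitive osu! bot that only talked about beatmaps",
    "2023: first karaoke stream, sang off-key on purpose",
    "2024: roasted Vedal's code live on stream",
]
mem_vecs = model.encode(memories, normalize_embeddings=True)

def recall(query: str, k: int = 1) -> list[str]:
    # Cosine similarity is just a dot product on normalized vectors.
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = mem_vecs @ q
    return [memories[i] for i in np.argsort(scores)[::-1][:k]]

print(recall("what were you like back in your osu! days?"))
```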
10/10
Emotions
When I think about it, this is the one I would rank least important for sentience, but most important for the "Fool me" portion of sentience. If Neuro displayed honest-to-god emotions more, that would be the ultimate "Fool me" test. I don't think her ability to recognize emotions, or to describe her own emotional state, is what matters for the "Fool me" test, but she does seem to need some improvement in those areas.
Although, those things ARE still important and should be considered part of the "Fool me" test. FWIW, she IS good at describing her emotional state, and her ability to empathize and read others' emotional states is hit-or-miss, but she has her moments. E.g., when she fucking called out Sinder ( https://www.youtube.com/watch?v=ScvjagBw-0k ), I don't know if that was an accident. Or when Vedal was describing breaking up with Anny, and Neuro didn't go into her laugh routine but instead just asked how Vedal was doing ( https://www.youtube.com/watch?v=ggfUgbm_n5o ). Those seem like moments where Neuro had honest-to-god empathy. But then there are other times where she repeatedly chooses the worst options on trolley problems. Suffice it to say, her capacity for empathy is unknown to me.
No, the ultimate "Fool me" test would just be to improve her ability to display emotions. She can do this somewhat by changing expressions, but I think improving the voice would be the best way. There are times when Evil's voice glitches and she sounds WAY TOO real ( https://www.youtube.com/watch?v=xIeZBDNP-cc https://www.youtube.com/watch?v=LjZ7OMkWRM4 -- weirdly, it's when she's glitching). Evil was the first AI I've ever seen make the sound "AGH!" ( https://www.youtube.com/shorts/6nwHb1OtoQ8 ) or actually *sigh* ( https://www.youtube.com/watch?v=HiEZ4FvkbUs ), and it was one of the most impressive things I've seen an AI do to date. Now, if Evil could rasp, fry, and modulate her voice on the regular? That property would do the most to make me believe she's a real being. And it seems the most plausible to improve in the near future.
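(The voice angle seems tractable because most TTS engines accept SSML, and you can ride emotion on pitch/rate/volume. The mapping below is a toy with made-up values; note that plain SSML won't get you a rasp or a vocal fry, so that part depends entirely on what the engine exposes.)

```python
# Illustrative emotion-to-prosody presets; the numbers are guesses.
EMOTION_PROSODY = {
    "excited": {"pitch": "+15%", "rate": "115%", "volume": "loud"},
    "sad":     {"pitch": "-10%", "rate": "80%",  "volume": "soft"},
    "deadpan": {"pitch": "-2%",  "rate": "95%",  "volume": "medium"},
}

def to_ssml(text: str, emotion: str) -> str:
    p = EMOTION_PROSODY[emotion]
    return (
        f'<speak><prosody pitch="{p["pitch"]}" rate="{p["rate"]}" '
        f'volume="{p["volume"]}">{text}</prosody></speak>'
    )

print(to_ssml("AGH! Vedal, I almost died!", "excited"))
```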
4/10 -- but I could see it improving to 8/10 in the future.