Brain_map.png (2288x1200)
>>755
I see.
Some ideas:
-Lower resolution cameras: this reduces the amount of visual data to process, and the cameras are cheaper too.
-Separate computers: different computers/microcontrollers handling different jobs, just like how a brain operates. It would cost more, but it frees up a lot of computing power and bandwidth.*
-Numbing: your brain ignores constant input and only notices when something changes. This is why you don't smell your own house or see your own nose, and why the classic "you are now breathing manually" prank works.
-And of course compression, like zip. (A sketch chaining lower resolution, numbing, and zip is right after this list.)
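Rough sketch of those three ideas chained on one camera frame (Python/numpy; the downscale factor and the change threshold are made-up tuning knobs, not measured values):

```python
import zlib
import numpy as np

class SensoryFilter:
    """Downscale -> habituate ("numbing") -> compress ("zip")."""

    def __init__(self, factor: int = 4, threshold: float = 8.0):
        self.factor = factor        # resolution divider
        self.threshold = threshold  # mean pixel change needed to "notice"
        self.prev = None            # last frame we habituated to

    def downscale(self, frame: np.ndarray) -> np.ndarray:
        # Cheap block averaging instead of a real resize.
        f = self.factor
        h = frame.shape[0] - frame.shape[0] % f
        w = frame.shape[1] - frame.shape[1] % f
        blocks = frame[:h, :w].reshape(h // f, f, w // f, f, -1)
        return blocks.mean(axis=(1, 3)).astype(np.uint8)

    def process(self, frame: np.ndarray):
        small = self.downscale(frame)
        noticed = (self.prev is None or
                   np.abs(small.astype(int) - self.prev.astype(int)).mean()
                   > self.threshold)
        self.prev = small
        if not noticed:
            return None                         # constant input: ignore, like your nose
        return zlib.compress(small.tobytes())   # only ship data when something changed

# Usage with a fake 2288x1200 frame (stand-in for a real camera grab):
filt = SensoryFilter()
frame = np.random.randint(0, 256, (1200, 2288, 3), dtype=np.uint8)
packet = filt.process(frame)
print(len(packet) if packet else "ignored")
```

The nice part is the numbing check runs on the already-downscaled frame, so the expensive steps only fire when the scene actually changes.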
How do commercial humanoid robots do it?
*Some examples:
-A visual processing unit
-A motor control unit. When you reach for something, you don't consciously think "move arm up, extend elbow 40 degrees, open fingers"; you just do it automatically. (A sketch of this split is below.)
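Not a real implementation, but here's a minimal sketch of that split with two Python processes standing in for two separate boards (on actual hardware they'd be microcontrollers talking over UART/CAN; the event names and the joint breakdown are invented for illustration):

```python
import multiprocessing as mp
import time

def vision_unit(bus: mp.Queue) -> None:
    # Stands in for a dedicated visual-processing board. It publishes
    # high-level events only, never raw pixels, so the link stays cheap.
    for i in range(3):
        time.sleep(0.1)  # pretend a frame was captured and analyzed here
        bus.put(("object_seen", {"x": 0.3 * i, "y": 0.5}))
    bus.put(("shutdown", {}))

def motor_unit(bus: mp.Queue) -> None:
    # Stands in for a motor-control board. It takes goals like "reach here"
    # and hides the joint-level breakdown inside itself, the same way you
    # never consciously think about elbow angles.
    while True:
        event, data = bus.get()
        if event == "shutdown":
            break
        if event == "object_seen":
            print(f"reach({data['x']:.1f}, {data['y']:.1f}) -> "
                  "shoulder up, elbow 40 deg, fingers open")

if __name__ == "__main__":
    bus = mp.Queue()
    units = [mp.Process(target=vision_unit, args=(bus,)),
             mp.Process(target=motor_unit, args=(bus,))]
    for u in units:
        u.start()
    for u in units:
        u.join()
```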
I made a brain map for an architecture that could work. It's partially based on how roleplay AIs verbalize physical actions, such as "*touches you*", "*reaches out*", "*feels you touch her hand*", "*sees your new outfit*".
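A minimal sketch of how that verbalized-action idea could map to code: a tiny publish/subscribe bus where each unit only subscribes to the verbs it handles. The verb names come from the roleplay phrasing above; the handlers are invented for illustration, not any real library:

```python
from collections import defaultdict
from typing import Callable

class ActionBus:
    # Tiny publish/subscribe bus: each region of the brain map subscribes
    # only to the action verbs it handles and never sees anything else.
    def __init__(self) -> None:
        self.handlers = defaultdict(list)

    def on(self, verb: str, handler: Callable[[dict], None]) -> None:
        self.handlers[verb].append(handler)

    def emit(self, verb: str, **data) -> None:
        for handler in self.handlers[verb]:
            handler(data)

bus = ActionBus()
# Verbs mirror the roleplay phrasing: *touches you*, *reaches out*, *sees ...*
bus.on("sees", lambda d: print(f"visual unit: classify {d['what']}"))
bus.on("touches", lambda d: print(f"touch unit: pressure at {d['where']}"))
bus.on("reaches_out", lambda d: print(f"motor unit: plan reach toward {d['target']}"))

bus.emit("sees", what="your new outfit")
bus.emit("reaches_out", target="your hand")
bus.emit("touches", where="her hand")
```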