Merging Man and Machine: How Feasible Is Robotic Augmentation, and How Could It Help Us?

Augmentation of the human body could be the holy grail for our society. In today's world we perceive our environment physically and visually, and as a consequence we are heavily dependent on information processed through our bodies. But picture a world in which unfamiliar environments are swiftly processed and dealt with, a world where we could obtain data from our surroundings that no unaided human body could achieve. Let me give you an example, perhaps in line with my current philanthropic efforts: a bionic eye. It could perform tasks unthinkable for a human oculus; it could detect temperature changes, heartbeats and infrared signals, and offer a wider field of vision. The possibilities are limitless.

However, for fear of being too speculative, we must come to terms with the neurological feasibility of robotic augmentation. Enter Moravec's paradox, which states that while relatively complex logical tasks are easy for a machine to compute, it is the simpler tasks that pose a problem. These simpler tasks include sensorimotor skills, such as manipulating a tennis ball or solving a Rubik's cube. They demand huge amounts of computational resources, because the motor regions of the brain, the spinal cord and the muscles throughout the body must coordinate seamlessly and in an integrated fashion.

An underlying cause of this issue is the limited data we can feed into augmented bionics. Because movements are anchored to the physical world, the data is limited to the specific conditions under which it was acquired. An ideal bionic would have the potential to continuously adapt to imperfect and unfamiliar scenarios. One idea in which I place faith to overcome this problem is the use of a foundation model, a model built on a deep neural network.

Specifically, a foundation model is one trained on broad data at scale that can then be adapted to downstream applications. This is an extension of deep learning, and it relies on the scalability of the data available to the model as well as on a transformer architecture. Because of this ability to adapt to downstream applications, provided enough data is fed through the foundation model, a bionic may be able to continuously adapt to new environments. There is a certain strange beauty in this too: the more adverse the conditions the bionic faced, the more advanced it would become.
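To make this adapt-to-downstream idea concrete, here is a minimal sketch in Python. A toy gradient-descent model stands in for a real foundation model (which would be a large transformer), and all names and numbers are illustrative, not drawn from any real bionics system; only the shape of the adaptation loop is the point.

```python
# Toy illustration of downstream adaptation: "pretrained" parameters
# (learned on broad data) are fine-tuned with a few gradient steps on data
# from a new environment. A real foundation model would be a transformer
# with billions of parameters; the adaptation loop has the same shape.

def predict(w, b, x):
    return w * x + b

def adapt(w, b, data, lr=0.1, steps=200):
    """Fine-tune (w, b) on (x, y) pairs from the new environment."""
    for _ in range(steps):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x   # gradient of squared error w.r.t. w
            b -= lr * err       # gradient of squared error w.r.t. b
    return w, b

# "Pretrained" parameters from broad data: y ≈ 2x
w, b = 2.0, 0.0

# A new environment shifts the relationship to y = 2x + 1
new_env = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = adapt(w, b, new_env)
print(round(w, 2), round(b, 2))  # converges toward w ≈ 2.0, b ≈ 1.0
```

The same loop run on each newly encountered environment is what "continuous adaptation" amounts to here: the harsher and more varied the data, the more conditions the model has been fitted to.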

A second large issue that bionics face concerns a biological phenomenon called motor imagery. I recently came to learn of this concept and its importance in motor movement. Motor imagery is a dynamic state in which a representation of a specific motor action occurs in the brain without any physical or motor output (i.e. no physical movement). Previously thought to be limited to certain adverse situations, researchers at UCL propose that all movement is preceded by a motor image: a representation of the said movement within the cerebellum. Provided the movement's parameters are correct, spatially and physically, the motor cortex will then produce the movement. One might say it is a simulation of the movement to ensure the perfect output.
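One way to picture this simulate-then-execute loop in software is the following hypothetical sketch. The forward model, its toy physics, the tolerance and the function names are all invented for illustration and are not drawn from the UCL work; the point is only the order of operations: imagine, check, then move.

```python
# Hypothetical motor-imagery-style control loop: an internal forward model
# "imagines" the outcome of a candidate motor command, and the command is
# released for execution only once the predicted error is small enough --
# a simulation of the movement before any physical output.

def forward_model(command, target):
    """Predict where a reach would land for a given command
    (toy physics: the hand lands at 0.9x the commanded position)."""
    predicted_endpoint = 0.9 * command
    return abs(predicted_endpoint - target)  # predicted spatial error

def plan_reach(target, tolerance=0.001, max_iters=50):
    """Adjust the imagined command until the simulated error is acceptable,
    then (and only then) hand it off for execution."""
    command = target  # initial guess: aim straight at the target
    for _ in range(max_iters):
        if forward_model(command, target) <= tolerance:
            return command  # simulation passed: release to the "motor cortex"
        command += target - 0.9 * command  # correct the imagined command
    raise RuntimeError("no acceptable movement plan found")

cmd = plan_reach(target=1.0)
print(round(0.9 * cmd, 2))  # the executed reach lands on target: 1.0
```

Note the design choice: no movement is ever produced from a plan that failed the internal simulation, which mirrors the claim that a motor image precedes every movement.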

A glaring issue, however, is that bionics are not biologically embedded at a neural level, so such a pre-movement simulation would be physically impossible for them. The future of bionics therefore seems uncertain for now, but a potential break in the horizon may come from combining the perspectives of engineering, neuroscience and physiology to make the seemingly impossible possible.

Get involved! Leave a comment down below to ask questions and give your view!
