Global Personal Ai = The 3rd Self + The 7th Sense + The 5th Dimension
When an always-on system built on a 4D, time-based typology is combined with a 5D UI (a controlling system over not only time but all other physical dimensions), then our 3D-world (in-ear) products and voice interfaces have arrived. With full always-on voice processing we are again in the early days of Apple: a new, usable-everywhere, economically viable product line that starts right with schools and basic consumer needs. But how? The 3rd Self technology and products like Tiiny Ai are an excellent example of such a convergence, and Nature Magazine, in "Ultrasound To Read Minds – Does the science stand up?", shows the exact path by which it will happen. But it is MIT that will win, just like Xerox PARC did with the GUI, Ethernet, PostScript printing, and the Smalltalk object-oriented programming language. Convergence always wins.
We can see this in its early stages from 2001 with The 3rd Self Architecture by iGNITIATE, detailed here on our site: real-time, interactive training-data processing via Galvanic Skin Response (GSR) / skin-conductance Electrodermal Activity (EDA), utilizing simple electronics plus basic software running on a Palm Pilot to act as a sensitive, real-time indicator of psychological or physiological cognitive load for training systems. This incredibly simple system was implemented in the tangible designs of OM – The Outlook Monitor for Fujitsu in 2006 and slimmed down to a tiny, mono-functionality handheld device the same year.
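As a rough illustration of that idea (a minimal sketch only, not the original Palm Pilot code; the class name, thresholds, and sample values are all illustrative assumptions), the snippet below shows how a raw skin-conductance stream can become a real-time cognitive-load flag: smooth the incoming signal, track a slow-moving personal baseline, and flag sustained deviations above that baseline.

```python
from collections import deque

class CognitiveLoadMonitor:
    """Toy real-time GSR/EDA monitor: flags sustained skin-conductance
    rises above a slow-moving personal baseline (illustrative only)."""

    def __init__(self, baseline_alpha=0.001, smooth_alpha=0.1,
                 threshold_uS=0.5, sustain_samples=3):
        self.baseline_alpha = baseline_alpha  # slow EMA -> tonic baseline
        self.smooth_alpha = smooth_alpha      # fast EMA -> de-noised signal
        self.threshold_uS = threshold_uS      # rise (microsiemens) counted as load
        self.sustain = deque(maxlen=sustain_samples)
        self.baseline = None
        self.smoothed = None

    def update(self, conductance_uS: float) -> bool:
        """Feed one sensor sample; returns True while elevated load persists."""
        if self.baseline is None:
            self.baseline = self.smoothed = conductance_uS
        self.smoothed += self.smooth_alpha * (conductance_uS - self.smoothed)
        self.baseline += self.baseline_alpha * (conductance_uS - self.baseline)
        self.sustain.append(self.smoothed - self.baseline > self.threshold_uS)
        return len(self.sustain) == self.sustain.maxlen and all(self.sustain)

# Usage: a fake ~20 Hz stream that jumps from a calm to an aroused level
monitor = CognitiveLoadMonitor()
for sample in [2.1] * 5 + [3.5] * 10:  # microsiemens readings
    if monitor.update(sample):
        print("elevated cognitive load detected")
```

The same loop could drive a training system directly: pause or simplify the exercise while the flag is raised, and resume once conductance settles back toward baseline.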
Due to the processing necessary to handle more robust datasets, The 3rd Self System via OM was modified for sports applications only, utilizing a fraction of The 3rd Self system architecture's overall capabilities. Then, in 2021 and just before the end of Covid, The 3rd Self System was the basis for TEDx Rome's Third Self launch, as detailed on LinkedIn: the natural conclusion of R&D efforts begun in early 2002, with the Ai explosion still yet to take place.
By 2015 MIT's Media Lab was pulling together these and other disparate technologies as new hardware systems began to emerge; nVIDIA's Ai chip architectures in particular supplied the processing power necessary for robust signal processing, ML, and Ai inference models to run on the desktop. Enter MIT's AlterEgo lab in 2018. For years MIT's AlterEgo silent-speech processing capability remained the rather cumbersome 2018 technology, achieving 92% accuracy but requiring large PCs / servers to run the system, before being slimmed down for the 2025 spin-out of AlterEgo. Think of Xerox PARC's Alto system in the 1980s, 20 years after "The Mother of All Demos" (1968, on YouTube) by Douglas Engelbart, the godfather of modern computing. Like Engelbart, who built his technology on more than 10 years of R&D inside DARPA and ARPA labs, the AlterEgo system is on the same trajectory today.
By early 2025 MIT had created a pared-down device like AlterEgo, coupled with the custom lightweight computer needed to run its CNN and BiLSTM models for processing and interpreting neuromuscular signals from the user's face and neck; still, this was a chunky device. Lab technology. Revolutionary lab technology. Then came 2024 and, later, the incredible March 2026 launch of Tiiny Ai on Kickstarter: an external, offline, separately battery-powered 128B-parameter juggernaut of an Ai system that fits in a pocket. Thus again we see fully realizable 0.5 / 1 / 3 / 5 / 10 / 20 year R&D and NPD (or R&D^3, R&D cubed) product development timelines, similar to the iPhone, Commodore 64, iPad, etc., and the radical changes in computing that come from them.
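For readers curious what such a pipeline looks like in code, here is a minimal sketch of the CNN-plus-BiLSTM pattern, written in PyTorch and purely illustrative (MIT's actual AlterEgo models are not published here; the channel count, layer sizes, and vocabulary size are assumptions): 1D convolutions extract local features from multi-channel electrode signals, a bidirectional LSTM models the temporal context in both directions, and a linear head classifies the silently articulated word.

```python
import torch
import torch.nn as nn

class SilentSpeechNet(nn.Module):
    """Illustrative CNN + BiLSTM over neuromuscular (sEMG-like) signals.

    Input:  (batch, channels, time), e.g. 7 face/neck electrode channels.
    Output: (batch, n_words) logits over a small silent-speech vocabulary.
    """

    def __init__(self, n_channels: int = 7, n_words: int = 20):
        super().__init__()
        # 1D convolutions learn local features across short time windows
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models longer-range temporal structure
        self.bilstm = nn.LSTM(input_size=64, hidden_size=128,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, n_words)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)              # (batch, 64, time / 4)
        feats = feats.transpose(1, 2)    # (batch, time / 4, 64) for the LSTM
        out, _ = self.bilstm(feats)
        return self.head(out[:, -1, :])  # classify from the final time step

# Usage: a batch of 2 fake recordings, 7 channels, 1000 samples each
model = SilentSpeechNet()
logits = model(torch.randn(2, 7, 1000))
print(logits.shape)  # torch.Size([2, 20])
```

The point of the pairing is division of labor: the convolutions compress noisy electrode streams into robust short-window features, while the BiLSTM captures the word-length temporal patterns that distinguish one silent articulation from another.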
But what this does not rule out is patent and trademark pincer moves like the one in Billions Season 7, Episode 7, where Michael Prince tells Dr. Mark Ruloff he will put him out of business by buying up everyone else's adjacent technology, patents, and trademarks, filing his own patents and trademarks, and driving the scientist's work and company out of business. Enter Merge Labs, started by Sam Altman and OpenAi, utilizing unrelated yet comparable read-and-write brain-activity ultrasound technology, as described in Nature Magazine. With newly emerging, highly experimental Neuromorphic Computing hardware still at least 10 to 20 years away, we see the full emergence of The 3rd Self via Silent Speech.
The big question: how will firms in the future adapt to and enable their staff via such a technology? Will systems such as this ever not monitor thoughts, emotions, and psychological responses to external stimuli? Will the system return responses with ultrasound, or with reverse Galvanic Skin Response (GSR) / skin-conductance Electrodermal Activity (EDA) technology? And when the lines blur between thought and action, and physical off switches are not possible with any such technology, will state-of-the-"Art" always-on Ai be just a pencil, or a pencil always drawing?