In 2018, Mark Zuckerberg said that a system Facebook was working on would let users type directly from their brains at a rate five times faster than they can type on their smartphones. The eventual plan is to turn the hardware into an easily manufactured wearable that could also serve as an augmented reality interface. In other words, a remarkable addition to wearable technology may not be far off.

Here are some compelling use cases of brain-to-text technology:

1. Honey Bee Communication

It can be incredibly stressful to communicate within the workplace, especially if you're the one running it. Like a beehive, the workplace is a busy place where constant communication is mandatory and is coordinated from a central source: the queen bee regulates the unity of the worker bees the same way a manager or COO maintains internal communication in the office. Engaging with every employee gets hard and stressful, so imagine a device you could use to simply think the notification or update you want to send out and deliver it to individuals or groups. Writing long emails is time- and energy-consuming, so why not just think out an email and send it? The device could be connected to a texting or speech platform to deliver the communication as text or audio messages, as in the sketch below.
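To make the idea concrete, here is a minimal sketch of what such a "think it, then send it" pipeline might look like in Python. Everything in it is hypothetical: the decoder and the messaging backend are placeholders standing in for a real brain-to-text model and a real texting or speech platform.

```python
# Hypothetical "think a message, then send it" pipeline.
# The decoder and messaging backend are placeholders, not real APIs.

from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    recipients: List[str]   # individuals or a group alias
    body: str               # text decoded from brain activity


def decode_text_from_brain(signal_window: List[float]) -> str:
    """Placeholder for a brain-to-text decoder.

    A real system would run a trained model over neural recordings;
    here we return a canned string so the sketch is runnable.
    """
    return "Stand-up moved to 10:30 in the main meeting room."


def send_message(msg: Message) -> None:
    """Placeholder for a texting/speech platform integration."""
    for recipient in msg.recipients:
        print(f"To {recipient}: {msg.body}")


if __name__ == "__main__":
    decoded = decode_text_from_brain(signal_window=[0.0] * 256)
    send_message(Message(recipients=["engineering-team", "ops-team"], body=decoded))
```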

2. Mute Communication

Hearing aids and eSight technology have been emotionally inspiring innovations ever since they came into existence. We have seen people see and hear for the first time, which is undoubtedly a moving experience capable of bringing tears of joy to the eyes of even the toughest people. Sign language has served mute people for ages, so it's about time they got a piece of technology of their own; after all, technology has always been about overcoming human limitations. So why not talk using your brain? Text-to-speech technology is readily available, so all we're waiting for is brain-to-text technology to hit the market and be integrated with text-to-speech, forming a device that lets a mute person speak as if they had never been mute. On top of that, with further upgrades they could even speak in different languages. A rough pipeline for such a device is sketched below.
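The sketch below chains the three stages mentioned above: brain-to-text, translation, and text-to-speech. All three functions are hypothetical placeholders; in a real device they would be replaced by a trained neural decoder, a translation service, and an off-the-shelf speech synthesizer.

```python
# Hypothetical chain: brain-to-text -> translation -> text-to-speech.
# All three stages are placeholders so the sketch runs on its own.

def brain_to_text(neural_frame: bytes) -> str:
    """Placeholder brain-to-text decoder."""
    return "Nice to meet you."


def translate(text: str, target_language: str) -> str:
    """Placeholder translation step (a real device might call a translation API)."""
    canned = {"es": "Mucho gusto.", "fr": "Enchanté."}
    return canned.get(target_language, text)


def text_to_speech(text: str) -> None:
    """Placeholder for an off-the-shelf text-to-speech engine."""
    print(f"[speaking] {text}")


if __name__ == "__main__":
    thought = brain_to_text(neural_frame=b"\x00" * 64)
    text_to_speech(translate(thought, target_language="es"))
```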

3. Research

Think of the endless research possibilities opened up by studying how people think. Branches of psychology, including forensic science, and modern medicine could do wonders exploring the human mind, with prior consent of course. It would also pave the way for emerging brain-control technologies: brain-to-text, brain-to-voice, brain-to-motion, and brain-to-control are some of the innovations we can expect in the future. Imagine an end to handheld devices such as smartphones for our daily tasks. What if we could surf the internet with our brains via a visual aid integrated with the device? The possibilities are endless, and we have only touched the tip of the iceberg of the next phase in human evolution.

Artificial Intelligence seeks to turn the thoughts of mute people into speech

For the first time, engineers have developed a system capable of translating thoughts directly into speech, marking an important step toward advanced brain-computer interfaces for people who lack the ability to speak. The system, created by researchers at Columbia University, works by monitoring a person's brain activity, identifying brain signals, and reconstructing the words the individual hears. Powered by speech synthesizers and artificial intelligence, the technology lays the groundwork for helping individuals who are unable to speak because of disability regain their capacity to communicate verbally.
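The general decode-then-synthesize idea described above can be illustrated with a toy example: learn a mapping from recorded brain activity to acoustic features, then hand those features to a speech synthesizer. The sketch below uses synthetic data and a simple least-squares decoder purely for illustration; it is not the Columbia team's actual pipeline.

```python
# Toy illustration of decoding neural activity into acoustic features.
# Synthetic data and a linear decoder stand in for real recordings and models.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 500 time frames of 64-channel neural activity, and the 32-band
# acoustic (spectrogram-like) features of the audio heard at each frame.
neural = rng.normal(size=(500, 64))
acoustic = neural @ rng.normal(size=(64, 32)) + 0.1 * rng.normal(size=(500, 32))

# "Train" a decoder: least-squares mapping from neural frames to acoustic frames.
decoder, *_ = np.linalg.lstsq(neural, acoustic, rcond=None)

# "Decode" new brain activity into acoustic features; a vocoder or neural
# speech synthesizer would then turn these features into an audible waveform.
new_neural = rng.normal(size=(10, 64))
predicted_acoustic = new_neural @ decoder

print(predicted_acoustic.shape)  # (10, 32) frames of reconstructed features
```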

“Our ultimate goal is to develop technologies that can decode the internal voice of a patient who is unable to speak, such that it can be understood by any listener,” Nima Mesgarani, an electrical engineer at Columbia University who led the project, told Digital Trends by email.

Parts of the brain light up like a Christmas tree, with neurons firing left and right, when people speak or even simply think about speaking. Neuroscientists have long endeavored to decode the patterns that emerge in these signals, but it isn't easy. For years, scientists like Mesgarani have tried to translate brain activity into intelligible speech, using tools such as computer models to analyze visual representations of sound frequencies.
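The "visual representation of sound frequencies" mentioned above is a spectrogram: a two-dimensional image showing how much energy the sound carries at each frequency over time. The short sketch below computes one for a synthetic tone standing in for speech; the sample rate and window length are arbitrary choices for the example.

```python
# Computing a spectrogram, the time-frequency "image" such models analyze.
# A synthetic 440 Hz tone stands in for a recorded speech signal.

import numpy as np
from scipy.signal import spectrogram

fs = 16_000                              # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1.0 / fs)          # one second of audio
audio = np.sin(2 * np.pi * 440 * t)      # 440 Hz tone as a stand-in for speech

freqs, times, power = spectrogram(audio, fs=fs, nperseg=512)
print(power.shape)  # (frequency bins, time frames): the 2-D image a model sees
```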