Just like information technology, medical technology is developing quickly. In this article we look at some emerging medical technologies such as exoskeletons, 3D bioprinting and wearables. These technologies are in the early stages of development, but are also very real, and could bring huge benefits to patients in the future.
Brain-machine Interfaces and Exoskeletons
A brain-machine interface is a device which interprets electrical activity in the brain and uses that information to control something external to the brain, such as a computer. The device reading the electrical activity can be placed on the exterior of the skull or implanted into the brain. Neuralink, a company founded by the tech entrepreneur Elon Musk, is making significant research efforts to develop a brain-machine interface. Their approach is to implant tiny electrodes into the brain using a highly precise surgical robot. They recently made headlines across the world when they demonstrated the Neuralink device working in the brain of a pig called Gertrude.
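The core idea of any brain-machine interface can be caricatured as a small signal-processing loop: detect "spikes" in a voltage trace, estimate the firing rate, and map that rate to a command. The sketch below is purely illustrative — the trace, threshold and command mapping are all invented for the example, and real devices are vastly more sophisticated.

```python
# Toy brain-machine interface loop: threshold-detect "spikes" in a
# simulated voltage trace, then map the firing rate to a command.
# All numbers here are invented for illustration.

def detect_spikes(trace, threshold=0.5):
    """Return indices where the signal crosses the threshold upwards."""
    return [i for i in range(1, len(trace))
            if trace[i - 1] < threshold <= trace[i]]

def firing_rate(spike_indices, duration_s):
    """Spikes per second over the recording window."""
    return len(spike_indices) / duration_s

def to_command(rate_hz):
    """Map firing rate to a cursor command (thresholds are arbitrary)."""
    if rate_hz > 8:
        return "move_up"
    if rate_hz > 3:
        return "hold"
    return "move_down"

# A one-second simulated trace with three clear threshold crossings.
trace = [0.1, 0.9, 0.2, 0.1, 0.8, 0.1, 0.2, 0.7, 0.1, 0.2]
spikes = detect_spikes(trace)
print(spikes)                                 # [1, 4, 7]
print(to_command(firing_rate(spikes, 1.0)))   # move_down
```

A real interface would filter noise, decode many electrode channels at once, and learn the mapping from neural activity to intent, rather than using fixed thresholds.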
Simple brain-machine interfaces have existed for years – though you might not have recognised them as such. One example is the cochlear implant, a treatment for severe permanent deafness. This device receives sound from the surrounding environment, processes it, then sends signals to the auditory nerve. Cochlear implants have made it possible for people to hear again, or for children to hear for the first time. Can you imagine what other brain-machine interfaces could do in the future?
How about using a brain-machine interface to give a paralysed patient the ability to control an exoskeleton, meaning they could sit up, walk and maybe even play sports? Whilst this technology isn't available as a treatment yet, it is being developed. The exoskeleton hardware exists and is being refined by companies such as ReWalk, with devices available to patients now. Future development will focus on making exoskeletons smaller, lighter and more energy efficient, so they can operate for longer.
The use of a brain-machine interface to control an exoskeleton has also been tested, in a study at Grenoble University Hospital published in 2019. A patient was implanted with electrodes over the upper-limb sensorimotor area of the brain and, over two years, learned to use the exoskeleton to walk and perform arm movements. As the technology is so novel, the patient was only allowed to use the exoskeleton in a lab, attached to a safety harness to prevent falls. However, this study shows the exciting future ahead for these technologies, which have the potential to completely change the quality of life of disabled patients.
Jellyfish collagen
You may have heard of collagen in the context of cosmetic products – it is often used as a “dermal filler” to plump skin and reduce the appearance of wrinkles. Collagen also has important uses in many medical devices, including [6,7]:
- in bone graft substitutes, to provide a matrix (like a scaffold) for bone to regrow (in combination with other materials),
- to repair cartilage in joints, again as a scaffold, to allow regeneration of natural tissue,
- in advanced wound dressings, to promote healing and control moisture,
- in heart valve replacements.
Collagen is therefore a useful and versatile material. It is a natural material, and the most common way to obtain it is from bovine (cow) sources. Because collagen is obtained from animal tissue, we have to be very careful to ensure that biological risks (such as bacterial or viral infection) are controlled and minimised. It wouldn’t be acceptable to use a collagen wound dressing on a patient, only to introduce a bacterial or viral infection. Another risk specific to bovine collagen is transmissible spongiform encephalopathy (TSE) – a neurodegenerative disease caused by mis-folded proteins.
The risks of infection or TSE from bovine collagen are very well controlled – an international standard (BS EN ISO 22442-1) clearly defines how animal tissues should be processed. However, a company called Jellagen has found a new source of collagen – jellyfish. One of the biggest benefits of jellyfish-derived collagen is that it is a very safe source – potentially free of disease vectors and viruses – almost eliminating the biological risks associated with bovine collagen. Another advantage is that it is a “type 0” collagen, which can be used in any application, whereas bovine collagens are specific to particular anatomical uses. The company also claims that, because jellyfish are not mammals, obtaining collagen from this source is more ethical (I’ll leave that for you to decide).
One day, if you have a heart problem, you could have a replacement valve implanted that was made from jellyfish collagen. How futuristic is that?
3D bioprinting
You’ve probably heard of 3D printers by now, which can make objects out of plastic or metal, layer by layer. There is growing research into 3D printing biological materials – and maybe, one day, even whole organs.
An article in the journal Nature Biotechnology in 2014 provided a great overview of bioprinting technology. One of the main considerations in the process is the creation of a bio-ink (the material that is going to be printed). This is made up of a base or structural material (such as a synthetic or natural polymer, or a more complex extracellular matrix) and a chosen cell type (differentiated cells or stem cells). The bio-ink can be printed in different ways. One is the “biomimicry” approach, which requires a very detailed design of the structure or organ to be fully replicated. An alternative, so-called “self-assembly”, exploits the processes of embryonic organ development: a simple structure is printed which then naturally develops into the more complex desired outcome (like an area of skin, or an entire kidney).
One particularly successful application of bioprinting is the printing of human skin. In 2017, scientists at Universidad Carlos III de Madrid reported a prototype bioprinter that can print fully functional human skin. Applications of bioprinted skin include the treatment of severe burns, and non-animal, non-human testing of pharmaceutical or cosmetic products.
Bioprinters are starting to become commercially available (although they are still very specialist), such as those from Allegro or Cellink. If you’re particularly interested in 3D bioprinting, a recently published review article (Askari et al., 2021) provides a more up-to-date and very detailed review of current technological capabilities.
Wearable medical devices
“Wearables” is a term we have heard a lot over the last few years; it refers to wearable technology such as smart watches. Wearable medical devices are an interesting area of development, allowing physical parameters such as heart rate to be measured.
A review article published in 2018 provides an overview of the potential applications of wearable medical sensors. These sensors can measure biochemical aspects, physiological parameters or movement. An example of a biochemical measurement is continuous glucose monitoring by a sensor applied to the skin (with a needle measuring glucose levels just underneath it); this allows people with diabetes to take regular glucose measurements easily. A physiological parameter such as heart rhythm can be measured by a non-invasive patch applied to the patient’s chest. Movement can easily be measured by a smart watch or other wrist-band-type device, allowing monitoring of activities or specific movements such as gait.
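The kind of processing a heart-rhythm wearable performs can be sketched in a few lines: derive beats per minute from the intervals between detected heartbeats, and flag rhythms whose beat-to-beat intervals vary widely. The beat timestamps and the 0.2-second variability threshold below are invented for illustration; real devices use clinically validated algorithms.

```python
# Sketch of wearable heart-rhythm processing: given the timestamps (in
# seconds) of detected heartbeats, compute the average heart rate and
# flag rhythms whose beat-to-beat (R-R) intervals vary widely.
# The 0.2 s spread threshold is invented for illustration.

def heart_rate_bpm(beat_times):
    """Average beats per minute from beat timestamps."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    mean_rr = sum(intervals) / len(intervals)  # mean R-R interval, seconds
    return 60.0 / mean_rr

def looks_irregular(beat_times, max_spread=0.2):
    """Flag if the spread of R-R intervals exceeds max_spread seconds."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return max(intervals) - min(intervals) > max_spread

steady = [0.0, 0.8, 1.6, 2.4, 3.2]    # beats 0.8 s apart -> 75 bpm
erratic = [0.0, 0.5, 1.6, 2.0, 3.3]   # widely varying intervals

print(round(heart_rate_bpm(steady)))  # 75
print(looks_irregular(steady))        # False
print(looks_irregular(erratic))       # True
```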
One of the major advantages of wearable medical devices is that measurements can be taken while patients are at home. This could allow patients to avoid hospital stays for monitoring a disease, reduce the number of trips to a doctor to take measurements, or increase the chance of detecting a disease in the first place (for example, from the heart rate and ECG measurements of a smart watch).
Some of the challenges in developing wearable medical devices include:
- ensuring that the measurements are accurate,
- ensuring that the obtained data is private and secure,
- designing devices that are small enough to be convenient for patients,
- designing devices that are low enough cost to be used by health systems with budget constraints.
CRISPR gene editing
Gene editing is the idea that genes (in a human, animal or plant) can be turned on or off, or changed, leading to a desired outcome such as disease prevention. This has been possible for some time, but was very expensive; CRISPR gene editing has made it cheap and easy. CRISPR utilises the “Cas” proteins found in bacteria, which usually help defend the bacteria against viruses, to target specific sequences of DNA: a short “guide” sequence directs the Cas protein to a matching stretch of DNA, which it can then cut.
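The targeting step can be mimicked with a toy search: the commonly used Cas9 enzyme is directed by a roughly 20-nucleotide guide sequence, and it only cuts where the matching DNA is immediately followed by an “NGG” motif (the PAM, where N is any base). The guide and DNA sequences below are invented for illustration.

```python
# Toy model of CRISPR-Cas9 target search: scan a DNA strand for sites
# matching a 20-nt guide sequence followed by an "NGG" PAM motif
# (N = any base), as required by the commonly used Cas9 enzyme.
# The guide and DNA sequences are invented for illustration.

def find_cut_sites(dna, guide):
    """Return start positions where guide + NGG PAM occur in dna."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        match = dna[i:i + len(guide)] == guide
        pam = dna[i + len(guide): i + len(guide) + 3]
        if match and pam[1:] == "GG":   # "NGG": any base, then GG
            sites.append(i)
    return sites

guide = "GACGTTACCGGATTACGATC"  # invented 20-nt guide sequence
# Two copies of the target: only the first is followed by an NGG PAM.
dna = "TTT" + guide + "TGG" + "AAAC" + guide + "TAA"

print(find_cut_sites(dna, guide))  # [3]
```

Real guide design also has to check the rest of the genome for near-matches, since off-target cuts are one of the main safety concerns with CRISPR.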
One possible use of this technology is to eliminate the parasite that causes malaria. Malaria is spread by mosquitoes that are infected with Plasmodium, a single-celled parasite. Some research has shown that by editing the genome of the mosquito, it can be made resistant to the Plasmodium parasite, stopping the spread of the parasite and the disease. Another approach may be to edit the genome of the parasite itself, to prevent it reproducing.
Some more potential uses of CRISPR gene editing, shown by early research, are:
- repairing a genetic mutation which causes retinitis pigmentosa (leading to blindness),
- influencing the production of proteins in the body to treat diseases such as Huntington’s,
- enhancing cells of the immune system to identify and kill cancerous cells,
- fixing the genetic mutation responsible for Duchenne muscular dystrophy.
Some people are concerned that the ability to edit genes could lead to “designer babies” – the idea that gene editing technology could be used not to prevent disease, but to engineer children to have certain eye colours or other characteristics. It remains an open question in the scientific community which uses of gene editing should be considered ethical.
Telepresence robotics and augmented reality
“Telepresence robotics” is like combining a remote-controlled robot with video calling. Surgical robots have been available for some time, but have been controlled by a surgeon in the same room; they allow the surgeon to be more precise than human dexterity alone can achieve. The next step in the development of this technology is for the surgeon to control the robot from another location, using electronic communication to operate it as if they were in the same room (hence “telepresence”). A BBC article describes how a surgeon in Canada used telesurgery to treat a patient 400 km away, as the patient’s clinic did not have the same expertise; the surgeon was able to provide the treatment the patient needed without travelling. More recent advancements in wireless communication, including 5G, have allowed similar operations from 3,000 km away. The key technological requirements are low latency, large bandwidth and high reliability.
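The latency requirement can be made concrete with a back-of-the-envelope calculation: signals in optical fibre travel at roughly 200,000 km/s (about two-thirds of the speed of light in vacuum), so propagation alone adds a round-trip delay that grows with distance. The figures below are rough estimates for illustration only; real systems add processing, routing and video-encoding delays on top.

```python
# Back-of-the-envelope round-trip delay for telesurgery, counting only
# signal propagation in optical fibre (~200,000 km/s, about 2/3 the
# speed of light in vacuum). Processing and encoding delays come on top.

FIBRE_SPEED_KM_PER_S = 200_000  # approximate

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

print(round_trip_ms(400))   # 4.0  ms -- the 400 km case
print(round_trip_ms(3000))  # 30.0 ms -- the 3,000 km case
```

Even at 3,000 km, propagation accounts for only tens of milliseconds; the engineering challenge is keeping the rest of the pipeline (cameras, encoding, network hops, robot control) just as fast, and reliably so.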
Another advancement in surgical technology is the use of augmented reality (AR). The surgeon can view the patient’s anatomy on a screen, or through an AR headset such as the Microsoft HoloLens, which combines what they are seeing with additional information such as the location of a tumour. With advanced medical imaging systems and AR software, aspects of the anatomy such as blood vessels or nerves can be highlighted in the surgeon’s vision. The advantages of using AR in surgery include rapid identification of targets (like tumours) and critical structures (like nerves), increased efficiency (the surgeon does not have to mentally match what they are seeing to the patient’s scans), and the potential for more accurate surgery.
Both of these technologies (telepresence robotics and augmented reality) can be used together, to potentially provide accurate treatment by specialised surgeons to patients that are thousands of miles away.
Artificial Intelligence & computer modelling
You might think of artificial intelligence (AI) as incredibly smart robots like those from the television show Humans. Whilst that type of “general AI” isn’t here yet, “narrow AI” is: AI programs created for specific tasks, which could have huge impacts on healthcare in the future. These programs take computer modelling to an extreme; they can analyse huge volumes of data and produce insights that would take humans a very long time to discover using conventional computing.
One form of AI already being used in healthcare is in pharmaceutical drug discovery. Currently, pharmaceutical drugs take a significant amount of time and money to develop – an estimated $2.6 billion to develop a new treatment. During drug development, companies have to create and test variations of the same drug to see which are the most effective with the fewest side effects; this is one of the most significant costs in the process. AI programs can be designed to reduce the number of drug variations that need to be designed and tested. An article in Nature provides some good examples of how this is applied. A company called Berg has used an AI program to model cancerous human cells and key physiological aspects of them, including lipid, metabolite, enzyme and protein profiles, with the aim of identifying new treatments based on the precise biological cause of the disease. An alternative approach is taken by a company called BenevolentBio, which has designed a program to analyse data from research papers, patents, clinical trials and patient records to identify previously unnoticed links between genes, symptoms, diseases, proteins, tissues and candidate drugs. Some articles even discuss the potential for a “virtual body” on which to test drug therapies.
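The link-finding idea can be caricatured as graph analysis: represent mined facts as edges between genes, diseases and drugs, then look for drugs connected to a disease only indirectly, via a shared gene, as repurposing candidates. Everything below — the entities and edges — is invented for illustration; the real systems are far more sophisticated than this toy.

```python
# Toy "knowledge graph" in the spirit of AI-driven drug discovery:
# edges encode facts mined from the literature (gene-disease and
# drug-gene links). A drug that targets a gene associated with a
# disease, but has no direct link to the disease, is flagged as a
# repurposing candidate. All entities and links are invented.
from collections import defaultdict

edges = [
    ("GENE_A", "disease_x"),   # gene A is associated with disease X
    ("drug_1", "GENE_A"),      # drug 1 targets gene A
    ("drug_2", "GENE_B"),      # drug 2 targets an unrelated gene
    ("drug_3", "disease_x"),   # drug 3 is already linked to disease X
]

neighbours = defaultdict(set)
for a, b in edges:
    neighbours[a].add(b)
    neighbours[b].add(a)

def repurposing_candidates(disease, drugs):
    """Drugs linked to the disease only indirectly, via a shared gene."""
    return [d for d in drugs
            if disease not in neighbours[d]            # no direct link
            and neighbours[d] & neighbours[disease]]   # shares a gene

print(repurposing_candidates("disease_x", ["drug_1", "drug_2", "drug_3"]))
# ['drug_1']
```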
So if you’re interested in a career in pharmaceuticals, it may be wise to gain some skills in computer science in addition to chemistry and biology.