Brain Awareness Week interviews

In celebration of Brain Awareness Week, we will give you a sneak peek into the lives of some of the Wyss Center team.

Dr Giulia Spigoni, Clinical Research Specialist and Project Manager


Could you briefly describe your role?

I am the clinical research specialist and my role here is to help make the devices developed by the Wyss Center more accessible to patients by helping those devices gain clinical trial approval. Clinical trials are a major step needed to bring medical devices into the real world.

Could you give a brief overview of one of the projects that you’re working on at the moment?

We have two projects that are in a clinical phase. These are InMap and NeuroTin.

InMap is a project to monitor patients with epilepsy in collaboration with the local hospital in Geneva (HUG). The neuronal basis of epileptic activity remains poorly understood. Patients with drug-resistant epilepsy who are being assessed with intracranial EEG electrodes offer a unique window onto this issue. The trial uses an implanted Blackrock NeuroPort multi-electrode array and recording system to reveal the activity of multiple single neurons in the neocortex of these patients. The trial will show proof of concept and provide safety data for patients in Europe.

The other project I mentioned is NeuroTin. The goal of this trial is to find out whether people can reduce tinnitus using neurofeedback. If the results of the clinical trial show that people can successfully reduce tinnitus with neurofeedback, the Wyss Center will develop a minimally invasive device that can be used outside the hospital or research lab. The device would be implanted under the skin of the scalp and provide continuous feedback to a phone app allowing the user to self-regulate the activity of their auditory cortex and reduce their tinnitus, wherever they are.

How did you come to work in this area?

Prior to the Wyss Center I worked for a contract research organisation where I was embedded in a company that was developing heart valve replacements. Heart valves were inserted into the heart via the femoral artery (at the top of the leg) with a wire. I mostly worked on devices in post-market clinical follow-up studies where I was involved in the regulatory and clinical trial approvals.

Before moving into clinical research (and gaining a master's degree in clinical trial management from the University of Geneva), I was a post-doctoral researcher working on the molecular and genetic aspects of neurodegenerative diseases in pre-clinical studies. I decided after a while that research wasn't for me. I preferred contact with people rather than working alone in the lab.

You have a great deal of experience across neurodegenerative disease, cardiac disease and now back to neuroscience. Where do you think we will see the biggest advances in the next ten years?

I think that wearables in general are going to see the biggest advances in the next ten years – wearable neuro-monitoring devices that allow patients to carry on with normal life, that predict what will happen and suggest what the patient should do.

What engineering breakthroughs do you think we will see to improve the current wearable devices?

Here at the Wyss Center we are working on a project that is very promising. This is a new device that would be implanted subcutaneously to monitor the activity of the brain from beneath the scalp. This device is minimally invasive and so does not need a craniotomy. The device would be almost invisible from an aesthetic point of view but could in the future allow epilepsy patients to receive forecasts on their phone about the likelihood of having a seizure.

The other area where I see potential advances is continuous glucose monitoring for people with diabetes – wearable devices that can check glucose readings in real time and track them over time.

What do you enjoy most about your job?

I enjoy constructive meetings. I like being free to brainstorm and come up with solutions. I also enjoy preparing documentation for regulatory and ethics submissions, and the interactions with the authorities, because you always have something to learn. I like my job!


Dr Brenna Argall, Visiting Fellow


Could you briefly describe your role?

I’m a roboticist. I’m a visiting research fellow at the Wyss Center, for one year, and I’m also an Associate Professor at Northwestern University in Chicago.

My role here is to intersect some of the Wyss Center's work with the research going on back in my lab in the US. My work involves adding robotics autonomy to assistive machines. The Wyss Center has a lot of expertise in neuroscience and in neural interfaces for clinical outcomes. What has been explored less is the use of brain information to control assistive machines that have autonomy. What I mean by autonomy is the type of technology that we see in driverless cars. That's exactly what my lab in Chicago researches. We work with robotic wheelchairs and robotic arms – all sorts of assistive machines that were designed to be operated by humans alone, but which, because of the severity of a user's motor impairment, can be difficult to operate. We add sensors, we add artificial intelligence, and we make it so that the machine can also partly control itself and share control with the human to make things easier. How that control sharing works depends a lot on the interface that the human is using to operate the machine. So far in my lab we've exclusively used interfaces that are available commercially to operate powered wheelchairs. These tend to be joysticks, switches mounted into the headrest of the wheelchair, and sip-and-puff, which is operated through respiration.
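One common formulation of the control sharing described above is linear blending: the command sent to the machine is a weighted mix of what the user asked for and what the autonomy proposes. The sketch below is illustrative only – the weighting scheme and the example commands are assumptions, not the lab's actual implementation:

```python
def shared_control(user_cmd, autonomy_cmd, confidence):
    """Blend the user's command with the autonomy's command.

    `confidence` in [0, 1] reflects how sure the autonomy is about the
    user's intent: 0 -> pure user control, 1 -> pure autonomy.
    Linear blending is one simple, common scheme among several.
    """
    a = max(0.0, min(1.0, confidence))  # clamp to [0, 1]
    return tuple((1.0 - a) * u + a * r for u, r in zip(user_cmd, autonomy_cmd))

# The user pushes the joystick straight ahead; the autonomy, having spotted an
# obstacle, suggests veering right. At 50% confidence the result is a compromise.
blended = shared_control(user_cmd=(1.0, 0.0), autonomy_cmd=(0.5, 0.5), confidence=0.5)
# -> (0.75, 0.25)
```

In practice the blending weight itself can depend on the interface: a noisy, ambiguous signal source (like a neural interface) might warrant giving the autonomy more authority than a clean switch input would.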

Where do you see the biggest developments happening in the next ten years?

Neural interfaces are still at the cutting-edge frontier of control technology – this is going to be a big game changer in the field in the next ten years. We are waiting for these kinds of interfaces to get out of the labs, get out of translational research centers like the Wyss Center and start becoming commercially available. The types of information that you get from advanced neural interfaces are very different from what you get from the interfaces that are commercially available today. The commercially available control interfaces that my lab currently uses are on the one hand much more limited than neural interfaces – you get much less information – but the information that you do get is much less ambiguous. If you touch a switch, the only thing that happens is that you switch on or off, but if you are interpreting brain signals, there is a lot of variability. The right way to design machine autonomy that can interface with brain information is a big unknown. And that's what we are trying to explore here with this collaboration.

What would you say is currently the biggest limitation holding back the field?

I think that right now the interpretation of brain signals is a huge limiting factor, as is the stability of those signals over time. These are the same challenges that you have with neural interfaces whatever you are trying to use them for: the signal changes over time, and if the interface is not implanted under the skull, or at least under the scalp muscles, you have a huge problem with the signal-to-noise ratio because the electrical signals from the scalp muscles mask the electrical signals from the brain.

When you’re trying to use these signals to communicate or to operate things like a computer, you don’t have physical motion. When you use them to operate a robot, these signals are going to move something physical in the world so it’s potentially dangerous, especially if what you’re controlling is carrying a person, like a wheelchair. We need these signals to be unambiguously interpreted. Robotics autonomy can help make that situation safer.

Do you mean like putting a kill switch in? Or limiting the reach of a robot arm so it can’t hit a person?

Yes, or you can detect obstacles and avoid them. But autonomy can also help to achieve a task. So if the robot arm knows that there are objects on a table then it can help provide the input needed to pick them up.

One of the things we do when we add robotic autonomy is called goal inference. We are trying to figure out what the human is trying to do. That becomes harder when there is ambiguity in the control interface. If you get the goal wrong, you will provide assistance towards achieving the wrong task which will frustrate the user.
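Goal inference is often framed probabilistically: maintain a belief over candidate goals and update it as the user's commands come in, assisting only once one goal is clearly the most likely. The sketch below illustrates that idea with a simple Bayesian update; the goal set, observation model, and confidence threshold are all illustrative assumptions, not the lab's actual method:

```python
import math

# Candidate goals the robot arm could be reaching for (illustrative 2-D positions).
GOALS = {"cup": (0.6, 0.2), "phone": (0.1, 0.8), "spoon": (-0.4, 0.3)}

def likelihood(user_cmd, hand, goal, noise=0.5):
    """How consistent is the user's command direction with heading toward `goal`?
    Modeled as an exponential of cosine similarity – a generic noisy-rationality
    assumption, not a specific published model."""
    gx, gy = goal[0] - hand[0], goal[1] - hand[1]
    norm_g = math.hypot(gx, gy) or 1.0
    norm_c = math.hypot(*user_cmd) or 1.0
    cos_sim = (user_cmd[0] * gx + user_cmd[1] * gy) / (norm_g * norm_c)
    return math.exp(cos_sim / noise)

def update_belief(belief, user_cmd, hand):
    """One Bayesian update of the belief over goals given the latest command."""
    posterior = {g: belief[g] * likelihood(user_cmd, hand, pos)
                 for g, pos in GOALS.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Uniform prior; the hand is at the origin and the user pushes toward the cup.
belief = {g: 1.0 / len(GOALS) for g in GOALS}
belief = update_belief(belief, user_cmd=(1.0, 0.3), hand=(0.0, 0.0))
top_goal = max(belief, key=belief.get)
# Assist only when the inference is confident, to avoid helping with the wrong task.
assist = belief[top_goal] > 0.6
```

The threshold step captures the trade-off from the interview: assisting on a wrong guess frustrates the user, so an ambiguous interface pushes the threshold higher.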

So, your job would be made a lot easier if you had a really good idea of what the human wanted to do before they tried to do it.

Absolutely, that would help.

So could you have a button that narrowed down the options? Like a button for reaching for an object or typing on a keyboard to help the autonomy assist in the right way?

Yes, there are researchers who work with humans explicitly providing their goal, but there's a question of what the interface would actually look like. The world around us is changing all the time, so it wouldn't be practical to just have a 'reach for cup' button. You could have a touch screen so you could touch the object you want to reach for. But if you are talking about people with severe motor impairments, how do they interact with the screen? Maybe they have an eye tracker, but there can be issues with these if they fixate on the wrong area or their gaze is misinterpreted. So it's not that there aren't other solutions out there, but there are no pre-packaged solutions that exist today.

What do you think is the biggest population of people that could benefit from your work?

I’d say in my work, it would be anyone who struggles to operate an assistive machine. So, this could be any number of motor impairments or cognitive impairments. Spinal cord injury is possibly the biggest population, but this work could also help people with degenerative conditions like amyotrophic lateral sclerosis (ALS), muscular dystrophy and cerebral palsy – basically anyone who has an issue between their mind knowing what they want to do but not being able to express it with their body in order to operate an assistive machine.

For neural interfaces specifically, the more severe the motor impairment, the more that patient population would be impacted. There are people who are locked-in, for example, who cannot operate any of the commercial interfaces out there and they are the ones who will benefit the most from neural interface-based shared control of an assistive machine. This is still somewhat uncharted territory and part of what we are doing during my stay here is understanding how to interpret the neural signals in order to interface with robotics.

We have some guesses because we’ve done it with other interfaces but the characterisation of the information we get from neural interfaces is fundamentally different. How to handle that is what we are now exploring.

What led you to work in this fascinating area – was it a childhood ambition?

I had a convoluted path. I was a math major but always had an interest in medicine. In my third year at university I saw a presentation given by an applied maths professor that talked about injecting nanoscale robots with little alligator mouths directly into the bloodstream to chew up blood clots. I thought that's awesome – that's the kind of thing I want to do! I worked at a National Institutes of Health neuroscience lab where I helped interpret fMRI data. I decided then that I wanted to focus on research and so went back to Carnegie Mellon University – where I did my undergrad – because they had loads of robots. When I got there, I talked to the professor who worked on really small robots and he told me "we don't have nanoscale robots yet – we have microscale but not nanoscale", so the professor who gave that talk was either talking about simulation, or theory, and I had missed it. I created my whole life trajectory based on something that didn't exist! In the end I joined a lab that did robot soccer. There's actually an international RoboCup soccer league that happens every year. My PhD was all about robot learning and robot autonomy. I was using big Segway robots. I decided then that my medical ambitions would be put to one side. I did my postdoc at EPFL, here in Switzerland. I was then offered the position at Northwestern in rehabilitation robotics in partnership with the Rehabilitation Institute of Chicago, which is the number one rehabilitation hospital in the US. That's when I started working on shared robotics control with humans.

What do you enjoy most about your job?

To be honest one of the things that I really enjoy about my job is the internationality of it.

It gives different perspectives. I also find that science and engineering level the field, a bit like music and the arts do. They provide a common language between people from very different backgrounds – I think that's what I like the most.

 

Dr Tracy Laabs, Deputy Director

Could you briefly describe your role at the Wyss Center?

I am the Deputy Director of the Wyss Center so, as part of the leadership team, I help the Director develop and implement policies and strategies that enable us to fulfil our mission of accelerating development of neurotechnology for human benefit.

We have five major programs and around 20 projects that fit within these program areas.

One of our major programs, which includes around six projects, is the movement restoration program. The projects in this program are all focused on the goal of restoring movement and communication in people with paralysis – these could be people with amyotrophic lateral sclerosis (ALS), brainstem stroke or other disorders. The program has multiple facets. It has neuroscience components, to understand how the brain encodes movement and how we can harness its innate plasticity, and engineering components, to develop technology that reads signals from the brain, decodes those signals seamlessly and in real time, and control algorithms that enable a person to, for example, move a cursor or a robot arm just by thinking about it. Ultimately, we'd like to actually connect the intent to move to technology in a person's own arm and restore movement.

As part of this program we have also established a number of clinical projects to build the knowledge and team to be able to support patients who are paralyzed to become more independent. Working with clinicians early in technology development projects is imperative to optimize design and understand the clinical limitations and implications of our technology.

These are big ambitious, exciting projects – how did you get involved in this area?

Well, I have been interested in neuroprosthetics since my PhD when I studied spinal cord injury and traumatic brain injury – more from the molecular standpoint – but I was also interested in different strategies to help people with nervous system injuries.

So you have a neuro-related PhD. Can you talk about your work prior to the Wyss Center?

After my PhD, I moved to the funding side of high risk science and technology projects, working for the US government as a contractor.  There I was fortunate to work with many academic, government and industrial labs to drive innovation in neuroscience and neurotechnology.

Can you talk about any of the projects that you worked on while you were there?

Sure, some of the projects I was working on there, aimed to develop a better understanding of how the brain and physiological systems react to stress and high demand training to enable better strategies and technologies for combatting and preventing stress related conditions.  Working at the intersection of neuroscience and human performance was a lot of fun and helping people understand how to use neuroscientific strategies to cope with a range of situations was very rewarding.  Another project I worked on aimed to develop neural interface systems to read from multiple subnetworks of the brain and then use this information to modulate the brain in a very sophisticated way to treat psychiatric disorders like depression and anxiety.

Do you know what the outcomes of those projects are? Did they result in any technology becoming broadly available?

Not broadly available yet – these projects are still ongoing. The types of technology that they work on there – and that we work on here – are extremely ambitious and take a great deal of time and resources, both financial and intellectual, to develop.

Looking at the disorders you have mentioned: depression, anxiety, paralysis and the inability to communicate, is there a particular group of people that you think would benefit most from advances in neurotechnology in the next ten years?

I think that if you could find a technology that would work for a disorder as complex as depression, that could lead to enormous benefits. I think that neuropsychiatric illness is probably the next frontier and neuropsychiatric patients could be the group that would benefit most from advances in neurotechnology.

Are you saying that because it is a large patient population and so many people could benefit or because, if there was a solution, these people would experience the greatest boost in quality of life?

That’s a difficult question but I think that people who suffer from disorders like intractable depression for example, also suffer from many secondary effects on their physical health in addition to their illness.  The first line of defence for depression is still with pharmaceuticals, many of which have terrible side effects impacting daily life to the point where some patients would rather face the depression.  The promise of neurotechnology is the ability to treat these disorders in a much more precise and targeted way.

So are you saying that by fixing depression you could potentially also fix secondary disorders?

It remains to be seen, but yes, I believe so.

So if this could have such a big impact, what are the limitations to creating a device for depression? 

Well it’s an extremely complex disease and difficult to study with implantable neurotechnology. For implantable technologies for epilepsy or Parkinson’s disease, for example, the patients needed to test the new device are often already candidates for brain surgery. Ethically it is more difficult to recommend a procedure like brain surgery for someone with depression who would not otherwise require surgery. This makes developing implantable devices for depression more challenging, but not impossible. In fact, there have been some studies using deep brain stimulation for depression. The results are promising for a subset of people, and there is hope for the future of this technology in depression. I believe that once we understand more about individual differences in depression – the brain circuits involved and how they change over time – once we miniaturize the technology, and once we understand more about the long-term effects of implantable technology, developing such a device should become easier.

Do you have a feeling of where we are likely to see the biggest advances in the types of neurotechnologies you work on in the next ten years?

I think the biggest advances will come when the technology is at a state where it is safe and user friendly enough to be available for use in the home. This would require fully implantable brain-sensing hardware with a wireless connection to an external unit. I believe this is achievable in the next ten years.

I also think that in the next ten years we’ll see advances in computing technology, decoding algorithms and low power electronics which will really have a huge part to play in the development of technologies that will be able to work 24/7.

What do you enjoy most about working in an organisation at the frontiers of a field like neurotechnology?

That’s a good question! I think what I like most is the perpetual learning. Although I am a neuroscientist by training, I have learned over the past ten years that there’s an enormous amount of information that can be transferred between disciplines. Putting physicists or electrical engineers together with biologists and clinicians is extremely exciting to me. I love when people teach each other things for the benefit of a technology. There’s so much hope in developing these types of technologies, and interacting with patients who are eager to benefit is also very rewarding. I also enjoy communicating what we do to the public. I like explaining the state of the art and what we are optimistic about for the future.

 

Sébastien Pernecker, Embedded Software Engineer

BAW 2018

Could you briefly describe your role at the Wyss Center?

The goal of my job is to define, write and test software that will be embedded in medical devices in accordance with medical device standards.

I work mainly on two Wyss Center projects. The first, called Neurocomm, aims to restore movement in paralysed people. It involves a microelectrode array implanted in the brain that can read the person’s intention to move. The goal is to allow people to regain movement but also their independence.

The second, called Epios, involves electrodes placed on the surface of the skull to monitor brain activity for epilepsy patients.

I am working to solve the challenge of how we can extract the brain signals from these implanted devices and how we can efficiently process the data on a small portable device while reducing power consumption.

What led you to work in engineering and neurotech. Was it a childhood ambition?

I always wanted to be a scientist, but when I had to choose my career I couldn’t decide whether to become a doctor or an engineer. In the end I chose to become an engineer but one that works in the field of medicine. After finishing my engineering degree at EPFL I was tempted to work in the renewable energy industry, but I finally decided to settle on medical engineering.

Before joining the Wyss Center, I was working in a more general industrial setting, but my personal ambitions and interest in medicine led me to focus more on medical device development. I wanted to work in an area that would have a positive impact on society and that would benefit peoples’ lives. I wanted to do something that would help people.

What do you enjoy most about working in a neurotechnology organization?

I appreciate the different backgrounds at the Wyss Center – there are people from academia and from industry. This results in interactions and a way of working that you don’t find in industry. I also like the level of communication between people here. People here really take the time to understand the challenges that they are addressing. We don’t go straight to development. We focus a lot on what the final user really needs. I have learned a lot about what paralysed people actually want from a device. I had previously thought that paralysed people want to walk. I realise now that this is one goal, but it is not necessarily the major goal for many of them. In fact, people would like to be able to wash themselves and to regain some dignity.

Where do you think we’ll see the biggest advances in neurotechnology in the next ten years?

I think electronic components will become ever smaller and this will allow us to incorporate more electronics inside the body. I also think that the field of artificial intelligence (AI) can give us valuable insights in how to process data, identify trends and analyse brain signals. I believe AI could really help us in our goal to restore movement and solve other nervous system disorders.

 

Stéphanie Trznadel, Field Clinical Research Associate

BAW 2018

Could you briefly describe your role at the Wyss Center?

I act as the interface between the research participants and the researchers. I accompany participants from the beginning to the end of a study. I work with clinicians at local hospitals to identify suitable participants. I meet with them – if they cannot move, I go to their homes – and I present the research to them and talk them through what it would mean for them to take part in a study.

Do you find that people are open and interested in helping with research or are they nervous?

They are not nervous at all, they are generally really, really eager to help. Even if the study is not going to benefit them directly, they are eager to help so that the study can benefit other people.

For example, we are developing a communication device for locked-in people, and for this study we are recruiting people with ALS (amyotrophic lateral sclerosis) who may not themselves be paralysed. During the recruitment for this, I have found people to be happy to help, not for themselves, but for other people who might need the device more than them.

Had you worked with patients before?

No, this is my first time.

How are you finding it? Can it be emotionally difficult at times?

On the contrary actually. It’s very exciting because these people are enthusiastic and motivated. They are more excited than a healthy volunteer because they are helping people and they might be helping themselves – their future selves. They are really interested in everything we are doing at the Wyss Center. It is nice to see people with stars in their eyes when you tell them what you are doing. The more study participants I meet, the more I realise that we need each other. They need us, and we need them.

What led you to work in neuroscience and neurotech. Was it a childhood ambition?

I thought psychology was really interesting, so I studied this for my first degree. Towards the end of the degree I took a course in neuropsychology and became interested in the biological side of behaviour. I then did another bachelor’s degree in behavioural biology, which involved learning a lot about animal behaviour. After that I decided I wanted to go back to studying humans, so I did a master’s degree in neuroscience in Geneva.

This led me to work as a researcher for four years at CISA, the Swiss Center for Affective Sciences, where I was part of an interdisciplinary team of psychologists and philosophers as well as neuroscience researchers. After a while I decided that I wanted to work in applied research, to work in a place where research concepts could be used to help people. This is how I came to work at the Wyss Center.

Where do you think we’ll see the biggest advances in neurotechnology in the next ten years?

I think that brain computer interfaces (BCIs) are evolving really quickly and there are more and more applications for them. Whatever the application is, I see a way for a BCI to be involved, from improving the way the home works to solving nervous system disorders. Think about how the world is changing, everything is getting more technological, more automatic, more connected. I think that BCIs are going to be very important in this revolution.

 

Dr Jorge Morales, Implantable Materials Engineer

BAW 2017

Tell us about your role at the Wyss Center

I work on projects that are developing different neural technologies for human benefit. Our goal at the Wyss Center is to accelerate technological development from the initial concept or idea to the final medical device, and there are important decisions that must be made with regard to the materials used. The choice of material is crucial both in the fabrication of components and in their integration into the final device, to ensure functionality and short development times. My role as implantable materials engineer is to help make those choices.

What led you to work in engineering and neurotech?

I’ve always been interested in science and engineering and how they can be applied to solve real world problems. My background is in materials science – I gained my engineering degree at Kyushu University in Japan and my PhD from the Université Grenoble Alpes, where I worked on biocompatible materials for the encapsulation of micro medical devices. Working on brain-computer and neural interfaces is a natural consequence of my childhood ambition to create or build things that can help people in some way. Creating implantable neurodevices is a great way to achieve this!

You tackle the engineering challenges of implanting neurotech in the human body. What do you think is the biggest challenge of long-term implantation?

One big challenge is to simulate the long-term aging and decay of implantable devices, especially miniaturized neurotech devices. We are building an accelerated aging system so that we can understand what will happen to electrodes that are implanted in the body for long periods of time. There are presently no guidelines or accepted standards to infer the long-term behavior of active implantable medical devices from accelerated aging tests lasting a few weeks. One of our goals at the Wyss Center is to contribute to the development of test protocols to assess the long-term aging of neurotech devices and materials.
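Accelerated aging tests typically rely on the "10-degree rule" (a Q10 Arrhenius approximation, familiar from guidance such as ASTM F1980 for passive devices): reaction rates roughly double for every 10 °C above body temperature. The sketch below shows the arithmetic; note that whether this extrapolation holds for active implantables over multi-year timescales is exactly the open question described above.

```python
def aging_factor(t_accel_c, t_ref_c=37.0, q10=2.0):
    """Acceleration factor of a soak test at t_accel_c relative to t_ref_c,
    under the simplified Q10 (10-degree rule) model."""
    return q10 ** ((t_accel_c - t_ref_c) / 10.0)

def simulated_years(test_weeks, t_accel_c, t_ref_c=37.0):
    """Real-time years of aging that `test_weeks` at elevated temperature
    stands in for, under the same Q10 assumption."""
    return test_weeks * aging_factor(t_accel_c, t_ref_c) / 52.0

# A 6-week soak at 67 degC is 30 degC above body temperature, so the model
# predicts a 2**3 = 8x acceleration: roughly 48 weeks of simulated aging.
factor = aging_factor(67.0)        # -> 8.0
years = simulated_years(6, 67.0)   # -> ~0.92 years
```

The gap the interview points to is that a few weeks of soak testing, even at 8x acceleration, simulates under a year of implantation, while these devices are meant to last far longer.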

You are working on multiple neurotech devices. Do you have a favourite? 

Several Wyss Center projects require novel packaging technologies to protect the internal microelectronic components from the rigors of being exposed to body fluids over long periods of time. These devices range from an implantable brain radio that detects the movement intentions of a paralyzed person, to implanted infrared sensors that measure blood flow and allow completely locked-in people to respond to questions with yes and no answers. Implantable devices like these all need to be encapsulated in materials that will be accepted by the body, that will minimize scarring, and that will be leak- and corrosion-proof. Traditional titanium canisters are reliable and constitute a mature technology, but they have limitations in wireless communication and power charging. As neurotech devices get smaller, more powerful and able to handle larger amounts of data, alternative packaging materials become necessary. Ceramic packaging is one such alternative and I am excited to be incorporating such novel packaging technology in Wyss Center neurotech.

 
