Halo Neuroscience is a company that makes neurostimulation devices which help people develop muscle memory faster. Our first two products, the Halo Sport and Halo Sport 2, accomplish this by targeting your brain's motor cortex with a small electrical current. The device (a set of headphones) is worn, for the most part, like any other pair of headphones. Users then pair it with their smartphone to control their neurostimulation sessions (and to play music, of course). I joined the team as a product designer prior to the release of the Halo Sport 2 in 2019, and have designed (or redesigned) major portions of the iOS and Android apps.
The Halo Sport app
Our design team currently includes a director who oversees marketing and product design, and one marketing designer. I typically own and execute the following activities before working directly with our developers to see them implemented:
Halo makes a brain stimulator for consumers, and brain stimulation is a foreign concept to most people (to say the least). With this challenge in mind, I approach design with the goal of bringing clarity to, and inspiring, our users.
Our design and development teams work closely together to iterate, under the principle that iteration produces better work. We also aim to iterate only when it makes business sense (so as to not iterate indefinitely).
Most importantly, my team and I act as advocates for our users. We want to remember that nobody is a power user like us.
Below, you will find samples of the work I've done at Halo, broken out by various disciplines with additional context around my process or objectives.
When I started redesigning the home screen, which became known as our "stim screen," I started with some quick sketching and wireframing, which would be discussed on a weekly basis with developers before moving forward. The goals for our new stim screen, in line with our overall design approach, were to reduce the time and cognitive load required for a user to begin neuropriming, flatten the navigation, provide more contextual state details, and find ways to handle more tasks automatically.
Initially I presented some concepts that gave users a dashboard of content on the home screen of the app (A, B, and C below). At the top of the screen, we'd let users select a session type and start it. Beneath the stim controls, we'd surface a feed of information that appeared in more granular detail in other sections of the app. This included tips for use, recent activity, and recommended content.
We were concerned about the value of this additional content — we weren't certain it was worth the prominence until we could show users were deeply engaging with it. And given that we knew users were still mostly interested in getting into a stim session quickly, we pushed toward a simpler home screen.
After stripping out the added content, I experimented with various interaction types for selecting a session and the amplitude (some of these are shown in D, E, and F below). I built functional prototypes of these camera-like carousels so we could go hands-on with the experience, at which point it became pretty clear to testers that it wouldn't be a comfortable experience. We also learned here that there wasn't a need to expose the amplitude up front all the time, given that once a user was comfortable with a stim's strength, they'd generally keep it there.
The direction we landed on included a bottom sheet for selecting the session type, with a big, simple start button in the middle. Once a session was started, the screen would transition to an "active stim" state. The dot-grid box shown here was a placeholder for an animated graphic that would be shown during the stim.
In tandem with screen layout exploration, I was building out user flows and user flow maps. Given the nature of Bluetooth connectivity, and headset-to-head contact, we had to capture as many edge cases as possible in the flow, with an emphasis on graceful failures and automatically resuming a session whenever possible.
At a later point, I designed a new settings screen for the app, with an objective to collect information about a user's device, guidance on how to use the device, and app configuration in one place. We utilized online usability testing to measure the performance of multiple variations before landing on the one we implemented.
Compared to some of the earlier wireframes, we landed on a simple, minimal aesthetic for our stim screen which respected and extended Halo's existing visual language. The mostly dark color palette was brought to life with pops of our "Halo green." An animated visualization looped around the session countdown timer (our "stim viz" — which is discussed in considerably more detail below).
A stim control bar, resembling a media player's control bar, was introduced above the tab bar during an active session to enable users to move freely throughout the app while maintaining control of the session's timer and strength.
The settings screen was broken up into three main blocks. The top of the screen introduced a "My Device" card with up-to-date information about the user's personal Halo Sport device, including firmware information. The support block featured user guides, FAQs, and contact info for support. The third block included account details and settings like email address and reminders.
Shortly after launching our new settings screen, we also revamped our firmware update flow and design. As with our stim screen experience, we labored to make firmware updates an excellent experience: users were given relevant details about the process, could navigate around the app while the update was occurring, and could pick up the process where they left off if they were disconnected from their headset at any point. A little looping animation on the home tab indicated an update in progress.
In line with promoting clarity for the Halo Sport product experience, our onboarding and user guides play a key role in user success. Here are a number of things that our users have to learn while using our brain stimulator (in no particular order):
To communicate this information in a way that could remain flexible in the future, we opted to use videos and copy that could be evolved to better serve users as we continued to learn from the field. We produced videos in-house and plugged them into a basic pagination flow alongside instructions (and yes, I'm the one in the videos too). Depending on how the user accesses a guide, they may see active stim details (like in the screenshots below) if they were attempting to troubleshoot a poor contact experience.
Given that users may not be able to start a neurostimulation session because they don't have a paired headset, need to charge a headset, update firmware, or insert a primer band, I introduced iconographic illustrations that were displayed on the stim screen as needed. These would be accompanied by additional instructional copy.
Our signature green highlight was intended to draw attention to the part of the illustration that most closely related to the purpose of the particular instruction.
I also repurposed variations of these illustrations for some onboarding illustrations and our firmware animation.
A portion of the redesign was spent on exploring alternative options for our neurostimulation visualization (aka "stim viz"). Rather than using OpenGL, which is what had been used historically, and leaning heavily on our developers to get it polished, I turned to Lottie: a library that can parse After Effects compositions exported as JSON by a plugin called Bodymovin, and render them natively on Android and iOS. Using Lottie for the stim viz gave me, as the designer, more control over the animation with a reasonably lightweight file. This wasn't to suggest our devs couldn't code up amazing, dynamic visualizations, because they could have totally done that. What it did do was mostly free us up from the back-and-forth work often required to get any kind of in-app animation dialed in.
While I was familiar with creating small animated icons and visualizations with Lottie, the devs were new to the tool. That said, I hadn't seen anyone try to have a Lottie file do what we were going to do with it, so we just went for it.
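For anyone unfamiliar with the workflow, here's a minimal sketch of what wiring up a Bodymovin export looks like on iOS using the lottie-ios 4 API. The file name `stim_viz` is hypothetical, standing in for whatever JSON the After Effects plugin produced:

```swift
import UIKit
import Lottie  // https://github.com/airbnb/lottie-ios

final class StimVizViewController: UIViewController {
    // "stim_viz" is a hypothetical name for the Bodymovin JSON bundled with the app.
    private let stimViz = LottieAnimationView(name: "stim_viz")

    override func viewDidLoad() {
        super.viewDidLoad()
        stimViz.frame = view.bounds
        stimViz.contentMode = .scaleAspectFit
        stimViz.loopMode = .loop  // keep the viz alive for the duration of a session
        view.addSubview(stimViz)
        stimViz.play()
    }
}
```

The appeal is exactly what's described above: the animation lives in a JSON file a designer can re-export, and the integration code barely changes when the motion does.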
Here’s what we wanted to accomplish with the Lottie animation:
After clarifying what Lottie was capable of rendering on both iOS and Android, I moved quickly from sketches, to vector paths, to testing short animated sequences in After Effects.
In terms of what we wanted to communicate with the motion, we considered a spectrum that moved from "purely technical charts" on one side, to "just neat to look at" on the other. The chart approach would follow the gradual improvement in a brain's plasticity during a stim session, and the gradual tapering off over the subsequent 60-minute period. The problem with the chart is that it wasn't going to move very quickly, so it would need to be partnered with some other movement to help it feel alive. Given that most users don't stare at their phone's screen during their training sessions, we shifted along our spectrum towards the more abstracted end in hopes that the brief moments where they were looking at the app would evoke a sense of, "Sweet! Something is happening to my motor cortex!"
What did we want to abstract, then? When somebody’s brain is in a "hyperplastic" state, new neural pathways are being created more rapidly. Electrical signals are moving along axons. Neurons are firing together, and they’re wiring together. So we considered overlapping waveforms, broken lines that would connect, and highlights that could communicate electrical signals.
We used lottiefiles.com alongside the Android Lottie preview app a ton in the process to fine-tune and test loop sequences. We followed this up with documentation of which sequences should appear when, and whether they were a one-time playback or a loop.
We ended up going with the option in the bottom right of the video below, and were able to get the amplitude to change the appearance (though the effect is different on iOS and Android due to slight differences in support).
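Amplitude-driven appearance changes like this can be done at runtime with Lottie's dynamic value providers. The sketch below is an illustration under assumptions: the function name, the amplitude-to-color mapping, and the keypath `"Ring.**.Color"` are all hypothetical, since the keypath depends entirely on the layer names in the After Effects composition:

```swift
import Lottie

// Hypothetical: tint the viz's highlight based on the current stim amplitude (0...1).
func applyAmplitude(_ amplitude: Double, to view: LottieAnimationView) {
    let intensity = CGFloat(min(max(amplitude, 0), 1))
    // Brighter "Halo green" at higher amplitudes; the mapping is illustrative.
    let color = LottieColor(r: 0.2 * intensity, g: intensity, b: 0.5 * intensity, a: 1)
    view.setValueProvider(
        ColorValueProvider(color),
        keypath: AnimationKeypath(keypath: "Ring.**.Color")
    )
}
```

Because iOS and Android implement dynamic properties slightly differently, an approach like this is one plausible reason the effect diverges a bit between the two platforms.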
I regularly create prototypes when trying to get a truer sense of how users will interact with a design. Sometimes these are for internal sanity checks, or to help our developers understand a design better. Other times I will use prototyping in testing with external users to get a sense of the design's performance.
This was a SwiftUI prototype I created to see if less copy, and bigger videos would perform better in our user guides.
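As a rough idea of the shape of that prototype, here is a minimal SwiftUI sketch of a paginated guide where the video dominates the page and the copy is cut to a single caption. The `GuideStep` model and all names are hypothetical, not the actual prototype code:

```swift
import SwiftUI
import AVKit

// Hypothetical model for one page of a user guide.
struct GuideStep: Identifiable {
    let id = UUID()
    let videoURL: URL
    let caption: String
}

// Variant under test: bigger video, minimal copy, swipeable pages.
struct GuidePrototype: View {
    let steps: [GuideStep]

    var body: some View {
        TabView {
            ForEach(steps) { step in
                VStack(spacing: 16) {
                    VideoPlayer(player: AVPlayer(url: step.videoURL))
                        .aspectRatio(9 / 16, contentMode: .fit)
                    Text(step.caption)
                        .font(.callout)
                        .multilineTextAlignment(.center)
                        .padding(.horizontal)
                }
            }
        }
        .tabViewStyle(.page)  // pagination, like the shipped guide flow
    }
}
```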
This prototype was built to help shape discussions around partnerships we had with some physical fitness companies.
This prototype explored what interactions and design might look like if we re-imagined how users engage with their neurostimulation performance goals, with quick sentiment feedback and notes at a per-session level.
I produced screenshots and app icons for the App Store and Google Play.