I’ve tracked personal data like weight, heart rate, productivity, and sleep for years now. As a data-junkie, it’s been a fun resource for experiments. At first, it took a lot of effort, and I gave up on a handful of datasets. Now, I spend less than a workday per year (total) working on what’s become a very rich and useful collection.
Collecting rich data about my life has given me an objective way to look at my past. It’s also become a significant part of how I make decisions about the future (maybe I’ll cover that in another post). If you’re curious about self-tracking, but you’re not sure if you have the energy to keep it up, maybe this tip will make the difference for you too.
I’ve been tracking my weight for close to eight years. At first, I reported actively, entering my weight into an app called WeightBot each day. A few years ago, I switched to passive reporting using a smart scale (Withings Body Cardio) that automatically logs my weight when I step on it.
You can see the difference in detail on the chart below (numbers removed for privacy).
When I reviewed the data, it was a revelation. The passive portion of the chart was so rich and objective that it read like a journal. What’s more, it took less effort to generate than the active portion.
The passive data tells a story. I remember the first dip when I left my job to live in Montreal and started running every day. I can re-live the gradual climb — my stress was growing as I was starting a new business. I can take pride in the recovery as work began to stabilize, and I could start building a team.
The active data is sporadic and not even as clear as my memory of that period. There are also some hints that it was biased, which I’ll cover next.
The total range here is relatively broad (≈16% of body weight), so I’m sure I would have noticed these changes without tracking them. Still, it’s been incredibly valuable to be able to look back at the past objectively. Reflecting on a lot of objective data like this has impacted some big decisions in my life.
This insight drove me to rebuild my tracking stack. I became obsessed with removing actively tracked data and finding passive approaches to tracking anything I could.
I think of active tracking as “tracking that takes real-time (or near real-time) effort.” This is in contrast to passive tracking, which happens without intervention.
The first section of the chart is active because I had to decide to log my weight in WeightBot each day. Though that process was easy, I fell out of the habit for months at a time. What’s more, those weeks and months when I was least motivated to measure my weight were likely the ones I was gaining the most.
Even if your measurements are accurate, your choice of when to measure can bias the data. If you look closely at the chart’s active section, you’ll notice that the areas where I tracked most frequently were the areas of sharpest decline. I was more likely to measure my weight when I was losing it, and I might have stopped measuring entirely when I was at my actual maximum.
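This sampling bias is easy to demonstrate with a toy simulation. Everything here (the weight trajectory, the logging gap) is invented for illustration; the point is only that perfectly accurate measurements can still produce a biased record when you choose the moments to take them.

```python
# Toy weight trajectory: a steady gain to a peak at day 100,
# then a steady loss. The numbers are invented for illustration.
true_weights = [80 + 0.10 * d for d in range(101)] + \
               [90 - 0.10 * d for d in range(1, 101)]

# Active tracking: suppose I stopped logging while gaining and only
# resumed ten days into the loss phase, once I felt motivated again.
active_samples = [true_weights[d] for d in range(110, 201)]

true_max = max(true_weights)          # the real peak (day 100)
observed_max = max(active_samples)    # the peak the data actually shows

print(f"true maximum:     {true_max:.1f}")
print(f"observed maximum: {observed_max:.1f}")
```

Every individual sample is exact, but because none of them land near the peak, the record understates the true maximum.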
Active tracking can be useful for changing your behaviour, or for short-term experiments. It’s a great way to know what happened during some period, but it’s a poor sample of what the data looked like before and after. Anyone who’s kept a food journal will know this effect. Simply writing down everything you eat turns out to be a great way to eat less junk for a week or two.
Sometimes there is no good option for passively tracking something. I’ve found this with productivity tracking. There are tools that try to track and classify your behaviour automatically, but I’ve found them too inaccurate to be useful.
I’ve tried a few of the personal tracking options available, and apps like Toggl or Timely for my work. Eventually, I landed on Timing for the mix of pricing, features, and a local-only option.
Timing creates a timeline of calendar events and computer usage (e.g., the title of each active window). I still have to bucket this data by project and activity manually, but it’s a fairly objective process since I’m working from passive data. I like to think of this as “assisted” passive tracking.
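A sketch of what that “assisted” step looks like in spirit: the raw events arrive passively (window titles plus durations), and the only manual work is maintaining the rules that bucket them into projects. The event shape, project names, and keyword rules below are all my own illustration, not Timing’s actual data format.

```python
# Keyword rules mapping window-title fragments to projects.
# These names are hypothetical, for illustration only.
RULES = {
    "Acme Website": ["acme", "figma"],
    "Writing":      ["draft", "medium"],
    "Email":        ["inbox", "mail"],
}

def classify(window_title: str) -> str:
    """Bucket a passively captured window title into a project."""
    title = window_title.lower()
    for project, keywords in RULES.items():
        if any(k in title for k in keywords):
            return project
    return "Unclassified"  # picked up during the weekly review

# Passively captured events: title of the active window + minutes spent.
events = [
    {"title": "acme-homepage.fig - Figma", "minutes": 50},
    {"title": "Inbox (3) - Mail",          "minutes": 15},
    {"title": "draft-post.md - Editor",    "minutes": 40},
]

totals: dict[str, int] = {}
for e in events:
    project = classify(e["title"])
    totals[project] = totals.get(project, 0) + e["minutes"]

print(totals)
```

Because the raw titles and durations are recorded automatically, the classification can happen weeks later without losing accuracy, which is exactly what makes the process sustainable.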
You can see a chart of thousands of productive hours below (slightly obfuscated for privacy). The different shades represent different projects and activities.
This chart is grouped by day, but I have roughly 15-minute precision across thousands of hours. I know when I’m most productive, when I tend to overwork, and which activities take up the largest share of my time. If you look carefully, you’ll even see that I took my first real vacation in years near the end of the timeline (hah!).
I’ve set a reminder every week to categorize my data. It’s a helpful mini-review to see how I spent my last week and decide how I’d like to spend the next one. At times I’ll become too busy and put off classifying data for weeks. Still, the app continues to track passively, and the resolution of that data is more than enough to maintain high accuracy when I’m ready to classify it again.
My total time spent in the app over the past year is only roughly 8 hours. Besides activities like journaling and goal-setting, this is the most time-intensive part of my dataset. Still, it’s provided me with some of the most valuable insights.
The trick with assisted passive data is to find a process that will keep reviews objective and minimize the cost of missing one. If you can passively track data in a way that you can still accurately classify it within a month or so of recording, you’ll have a much higher chance of keeping it up.
After years of self-tracking, I’ve found that the more passive a tracking process is, the more likely you are to derive value from it. Early on, it’s easy to get excited about your data and put in the work to keep active processes going, but as your stack matures, you realize how much more value there is in your passive measurements.
Here are some guidelines to follow:

- Prefer passive tracking wherever a reliable option exists; it produces richer, less biased data for less effort.
- Reserve active tracking for short-term experiments or deliberate behaviour change, and don’t trust it as a sample of the before and after.
- Where no passive option exists, look for an “assisted” passive process: capture raw data automatically and classify it later.
- Keep reviews objective and minimize the cost of missing one. If you can still classify data accurately a month after recording it, you’re far more likely to keep the habit up.
If you’re just getting started with tracking personal data, I hope you find this helpful. Trust me, it’s well worth exploring!
When we were developing Digital Directories at Mappedin, we faced a unique design challenge. It’s what I like to call a problem of “fuzzy constraints” (more on that later).
Over the years, we had built several custom touchscreens in shopping centres to help visitors find what they were looking for on a map. These screens had various sizes, heights, and resolutions, and we designed each interface so that it would work under those unique conditions.
The first generation of these interfaces came at a time when touchscreens were rare in malls. Visitors required a great deal of walkthrough and explanation. Later, as touch directories became commonplace, our designs took advantage of visitors’ familiarity with frequent interactions like search.
We knew we had to update our first-generation designs to keep them modern, but developing a bespoke interface for every screen while meeting the demands of new business would be impossible. The hardest constraint was that these interfaces were installed on many types of hardware at different heights, sizes, and resolutions. Each piece of hardware had its own quirks, and the existing designs reflected that. It seemed impossible that one format and one set of components could serve all of those unique conditions — yet that was the only solution that would scale.
I like to think of this type of challenge as a problem of fuzzy constraints. There were no clear rules to follow that ensured a solution would work in every scenario. With so much of our effort driven by intuition, it was hard to communicate why a design would work (or not work).
A designer might spend hours on a new feature only to realize in testing that an essential element was out of reach or less visible on one obscure device in some particular venue. This solution wasn’t going to scale, and we were quickly reaching our breaking point.
When the constraints you’re working with are too “fuzzy,” the goal is to define them.
In many cases, we can bound constraints in just one direction. For example, there is a minimum scale at which letters will be legible for the vision impaired, but there is no maximum. The larger the text, the more readable it is (in most settings).
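That one-sided bound can be made concrete with the visual-angle formula: a letter of height h viewed from distance d subtends an angle of 2·atan(h / 2d). The specific 0.4° threshold below is an illustrative assumption, not an accessibility standard; real guidance varies by context and audience.

```python
import math

def min_letter_height_mm(viewing_distance_mm: float,
                         visual_angle_deg: float = 0.4) -> float:
    """Smallest letter height that subtends the given visual angle
    at the given viewing distance. The 0.4 degree default is an
    illustrative threshold, not a published standard."""
    angle = math.radians(visual_angle_deg)
    return 2 * viewing_distance_mm * math.tan(angle / 2)

# Same angular threshold, very different physical sizes:
print(f"phone at 0.35 m: {min_letter_height_mm(350):.1f} mm")
print(f"kiosk at 1.5 m:  {min_letter_height_mm(1500):.1f} mm")
```

The lower bound scales linearly with viewing distance, which is why type that is comfortable on a phone becomes illegible on a directory read from across a corridor.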
But, when designing a big interface, you’re managing many different types of users and displays. What’s out of view in one scenario could be just right in another. The ideal guidelines exist within a fuzzy cloud of boundaries and dependencies, and it’s often difficult to know where to start defining them.
As with most design problems, the best place to start was in the field. We studied a range of users across the many setups we had, and a theme soon emerged from the research: the one limitation common to all large interface design was visibility.
When an interface is scaled up, you can only focus on a small portion of it at one time. The sensation of using a screen so large that it extends beyond your central area of vision can feel jarring and unfamiliar, and a change or alert in one area of the screen can go entirely unnoticed by a user depending on their size and position.
As designers, we had to think more like choreographers: attracting the user’s focus and directing it fluidly around the screen. With this in mind, we set out to define meaningful boundaries that resolved the issue at the core of our research: visibility.
Human vision seems complex, but there’s an easy way to think about it. The visible region comes in two parts: one resembles a pair of binoculars, the other a target.
Central(ish) Vision — The two rings in the centre of the target account for your “central” to “near peripheral” vision, where elements tend to be in focus.
Peripheral Vision — The outer binocular shape is your “far peripheral” vision. This area is not great at detecting changes in colour, but it’s fantastic for detecting motion.
Side Note — I heard this referenced in a talk once in a way that’s made it simple to remember: think in terms of evolution — it’s more important to see something is coming at you than to know whether it’s a lion or a tiger.
We’re used to designing interfaces contained entirely by the central area of vision. For example, if you view your phone from a comfortable distance, the whole screen will be in your central vision.
On large screens, we want to present essential elements in the user’s central vision. We can still make use of the other space, but if we have to direct a user’s attention to a new area of the screen, we should use motion to introduce it.
To build our model, we projected these two main fields of view onto many of our different screens from standard viewing angles and distances. You’ll see two examples above. We also considered many other users, including those with limited mobility (e.g., users in wheelchairs) and users who would stand closer to the screen due to vision impairment. This process revealed some of the most extreme cases we’d have to account for in our model.
When we overlaid all of the many possible visibility boundaries, we started to notice a pattern: some areas of the screen were particularly risky for displaying information and functionality. For example, the very top of the screen was almost entirely out of sight for some users viewing the screen at a comfortable distance — especially those wearing baseball caps!
With that, we had the constraints we needed to build a model. Inspecting the many different models, we sectioned the screen vertically into six equal zones. We identified each zone’s limitations in two scenarios: one for standing users and another for seated users. These zones would become our “Visibility Model” and would inform all of our design exploration and reviews.
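A model like this can be checked numerically. The sketch below is my own reconstruction of the idea, not Mappedin’s actual tool: the screen dimensions, eye heights, viewing distance, and the 60° “central(ish)” cone are all assumed values, chosen only to show how six vertical zones fall in or out of view for standing versus seated users.

```python
import math

# Hypothetical directory: 1.9 m of glass, bottom edge 0.3 m off the
# floor, split into six equal vertical zones (zone 0 at the top).
SCREEN_BOTTOM, SCREEN_TOP = 0.3, 2.2
ZONES = 6
CENTRAL_CONE_DEG = 60     # assumed "central to near-peripheral" field
VIEWING_DISTANCE = 0.8    # metres from the glass (assumed)

def zone_visible(zone: int, eye_height: float) -> bool:
    """True if the whole zone sits inside the central cone
    for a viewer at the given eye height."""
    zone_height = (SCREEN_TOP - SCREEN_BOTTOM) / ZONES
    top = SCREEN_TOP - zone * zone_height
    bottom = top - zone_height
    half_cone = math.radians(CENTRAL_CONE_DEG / 2)
    return all(
        abs(math.atan2(y - eye_height, VIEWING_DISTANCE)) <= half_cone
        for y in (top, bottom)
    )

for label, eye in [("standing", 1.6), ("seated", 1.2)]:
    visible = [z for z in range(ZONES) if zone_visible(z, eye)]
    print(label, "sees zones", visible)
```

Even with these made-up numbers, the pattern from the research shows up: the topmost zone is outside everyone’s central vision at a comfortable distance, and standing and seated users comfortably see different, overlapping bands of the screen.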
As we established our guidelines, we thought carefully about how each section of the screen would operate under different conditions. For example, while the top of the screen was out of sight during regular operation, it turned out to be a great place to put information like the time and opening hours, which could be viewed by many visitors from a distance.
Out of this process came a simple set of guidelines that would keep our interfaces in check without an overload of working knowledge.
By considering the diversity of visitors and screens, we developed a set of components that could easily be re-arranged to suit everyone’s needs. When we encountered a new screen format, we could simply check it against our model and adjust to fit if needed.
Finally, we could use a simple pair of overlays, representing each section of the screen and its purpose, in all of our interface design projects. This tool made it simple to implement, review, and communicate the model. We could ensure that directories kept important cues and elements comfortably within view for any user. Elements placed outside those areas would be larger, introduced with movement, and not required for interaction.
It’s amazing what a good set of constraints can do for your design collaboration and reviews. Knowing the boundaries of a problem can be the difference between a design that fails in testing, and one that passes with clear and meaningful justification.
JUJU was founded by artists who recognized a gap in the market for live online entertainment.
On one side of that gap, you had short-form events on platforms like Instagram. These streams were not curated, the quality was inconsistent, and the only way to earn from them was with a tip jar. Many artists aren’t comfortable “passing around the hat” to collect donations and end up giving away their content for free.
On the other side, you had long-form concerts with high quality and high ticket prices. These are expensive productions. The risk of investing in a show like this and having it flop is enough to keep most independent artists at bay.
Our goal was to deliver curated live content with concert quality, but with a shorter format and lower price. Most artists will perform a JUJU set of four or five songs for just a few dollars per ticket.
We knew that people were craving concert-quality streaming, so we invested a lot of effort into testing several streaming partners and platforms. It turns out that a little effort and preparation will get you to ticket-quality content. The requirements for the artists are minimal. Many already have access to the necessary home studio gear and a smartphone with a decent camera.
You can hear the quality in the promo video below. You might be surprised to find out that the audio in this clip was pulled directly from the live stream without any post-production. It’s what the audience experienced live.
As I said, we launched quickly. Our first show was within a month of being presented with the idea. With this speed came problems we had to solve in a hurry.
One such issue was unpredictable behaviour with the streaming player we selected. If the stream didn’t start until after the player was loaded, the video would occasionally fail to start entirely. Not good.
The solution we came up with was to give the fans access to a sort of “lobby” page with their tickets. Fans could visit the lobby page in the days leading up to the show, but it would display a show poster beside the comment feed instead of the live stream player. On the day of the event, the artist would begin streaming, and we’d watch as the stream buffered for a moment before flipping a switch to replace the poster with the live stream player for everyone waiting to watch. It was an odd fix, but it meant that fans would never be presented with the player before the video started. Crisis averted.
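The mechanism behind the lobby is a single server-side flag. The sketch below is an illustration of that shape, not JUJU’s actual code; the class, field, and payload names are all hypothetical.

```python
# Minimal sketch of the "lobby" workaround: the page renders a show
# poster until an operator confirms the stream is buffering, then one
# flag flip swaps in the player for everyone waiting.
class Lobby:
    def __init__(self, poster_url: str, stream_url: str):
        self.poster_url = poster_url
        self.stream_url = stream_url
        self.stream_live = False  # flipped manually once buffering is confirmed

    def go_live(self) -> None:
        self.stream_live = True

    def render(self) -> dict:
        """What the fan's page shows beside the comment feed."""
        if self.stream_live:
            return {"widget": "player", "src": self.stream_url}
        return {"widget": "poster", "src": self.poster_url}

lobby = Lobby("poster.jpg", "https://cdn.example/stream.m3u8")
print(lobby.render()["widget"])  # before the show: the poster
lobby.go_live()
print(lobby.render()["widget"])  # showtime: the player
```

Because the player widget is never rendered before the stream exists, the flaky player can no longer fail on an empty stream.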
We announced the event early one morning a few days before the show. I like to do what customers will do as a sort of launch day ritual, so I bought a ticket to the show. I was satisfied to see that I landed safely on the “lobby” page, but shortly after, I saw a comment pop up in the comment feed. That couldn’t be right: the show wasn’t going to start for another three days.
I panicked. Were we not clear that this is a live event? Maybe landing on this “lobby” page was confusing for the audience. How many fans might be waiting right now for the show to start!?
But, as comment after comment started to pour in and I read through, I saw that these fans did understand the premise and were using the lobby page as we never expected. They were leaving personal notes for the artist before the performance. Notes like:
So excited! The last time I saw you in concert, your children were small and wanted to sit beside you on stage!
Last year we celebrated our 11th wedding anniversary by attending the Moon vs Sun concert in Montreal. To be able to celebrate our 12th anniversary by watching your online concert…it is hard to ignore that a tradition is being born!
Through the rest of the day, I left the page open on one monitor and watched as fans left incredibly personal and heartfelt notes to the artist.
On the day of the event, we sat in the lobby with the audience as the anticipation grew. Messages were now hitting the feed as quickly as we could read them. For the first time since the COVID-induced lockdown, we were sharing a moment with hundreds of other people. One comment captured the moment beautifully:
I love this feeling — same feeling when you sit in the audience before a CK show starts, [watching] the beautiful grand piano on the stage and the awesome lighting, and then the house lights dim…
Incredibly, the lobby was the result of a last-minute fix to an odd bug. It had left an impact that we could never have expected. It gave us a space to share our thoughts and a chance to share our excitement at a time when we’re all separated. The experience convinced me once again that nothing will teach you more about a product than launching it.
Research → Experience → Interface → Prototyping
Research is where you gain detail on your users and the challenges they face. A talented researcher will uncover insight and empathy by monitoring and interviewing users. They’ll build tools (e.g. user personas) that will ground your team in a deep understanding of the problems you’re solving.
Experience covers the pattern of interactions, feedback, and information that define the way a user will engage with your product. Experience designers produce wireframes and flow charts that choreograph a user’s journey from their first impression to their mastery of the product.
Interface designers create visual components and screens that influence the look and feel of your product. They work with color & contrast, visual hierarchy, and animation to create a blend of familiar interactions with a unique visual identity.
Prototyping provides the team with interactive mockups for testing and iteration. Prototypers work in code or using specialized tools to quickly create experiences resembling the end product closely enough to test with users in context and rapidly iterate.
Most designers working on product teams will have expertise somewhere along this spectrum. In most cases they’ll also have some skills in adjacent domains, for example many experience designers have skills in interface design or user research.
This is a quick way to understand designers and design teams, their strengths, and any gaps that might exist. If you’re a small company looking to hire your first designer, it helps to understand which set of skills you’re looking for and which candidate is the best fit. As you grow your team, you can use it to balance complementary skill sets. At Human Collective, when we’re helping a client decide where to start, we always recommend hiring from the inside out and maintaining a balance that matches their needs.
A word of caution: any useful model is a generalization, and is therefore destined to be wrong around the edges. That said, I’ve found this to be incredibly useful when explaining design roles to non-designers.
“How I think about design specialization” was originally published in Human Collective on Medium.
If someone at work asks their manager “Why do we hold this weekly meeting?” or “Why does our product have this feature?” the response should never be “Why not?”
Why is it important? Maybe it’s mission-critical, maybe your competitors do it, or maybe it’s just “expected” in your industry. Processes are defined with good intentions, but over time conditions change, and when they do, our instinct is to hold our ground.
To ask “Why not?” is to justify losing effort and gaining complexity just because there’s no good reason not to. Instead, we should ask ourselves “Why?” and remove the features, meetings, and processes for which we don’t have a good answer. That ensures everything we do has a clear sense of purpose, and it creates space to do the things our competitors don’t do and that aren’t “expected” in our industry.