I have been playing around with Soketi as a self-hosted Pusher alternative and, while the software is great, boy, are its documentation and error messaging lacking. If you’re trying to run it and get the error
There was an error while parsing the JSON in your config file. It has not been loaded.
This is, as near as I can tell, the minimum required set of keys to get an app working; there’s a sketch of the shape below. Without the empty webhooks array, it kept failing on me.
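Here’s roughly that shape, sketched from memory rather than copied from a working install - the app-level keys (id, key, secret, and that empty webhooks array) are the part that matters, while the placeholder values and the exact option path are things you should double-check against soketi’s current docs:

```json
{
  "debug": true,
  "port": 6001,
  "appManager.array.apps": [
    {
      "id": "my-app-id",
      "key": "my-app-key",
      "secret": "my-app-secret",
      "webhooks": []
    }
  ]
}
```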
I still have not gotten a pm2 instance to accept a config file. I gave up on the Docker instance because it doesn’t allow more than one app per instance and I want something more flexible.
I’m sure it’s great and super easy if you’re just spinning up a single app, though!
Ernest Goes to Camp is the only movie I can recall that ends with a dramatic (frantic?) waving of a temporary injunction. After the Home Alone-esque fight betwixt kids and construction workers, of course.
Right as I was getting out of newspapers I was talking with our circulation manager, who had just heard of a revolutionary new idea that was going to save the industry. As a baseline, let’s say the paper cost 75¢ per issue (I worked at a moderate-sized daily). You buy it from one of the little metal newspaper houses, 75¢. Grocery store, 75¢. Buy a subscription, you get a little discount, but there’s one flat rate you pay.
Then, one day, some economic geniuses from high atop the mount gazed into their scrying balls and noted, “Hey, rich people have more money.” From this fact, they extrapolated a theory that rich people would be more likely to spend more money than non-rich folks. Thus was born our new Model for Journalism™: income-based pricing.
As you might have guessed by even a passing knowledge of the current state of the journalism industry, this did not solve the problem. Now, they rolled this out with a modicum of sense. They didn’t just suddenly jack up the rates on everybody; when subscription renewals came up, they just modified the increase so it was higher for some people than others. Because they lacked detailed demographic information on individual customers (I shudder to think what they would have done had this initiative been launched in 2024), they based it loosely on Zip codes. (This had the added benefit of making sure that neighbors wouldn’t be discussing the price of the newspaper and find out they were paying vastly different rates.)
It worked, kinda? For a little bit, anyway. Some people were willing to pay more, and the sales people were instructed that if customers put up too much of a fight, they could resub at the new standard rate. But there are two crucial flaws to this approach; I won’t name them yet, because first I want to talk about how this idea has absolutely exploded across the entire American marketplace.
Anyone who’s been to the grocery store knows that prices have gotten significantly higher since COVID. As have fast food prices, concert ticket prices, and streaming service subscription fees.
Some will point to the laws of supply and demand, which is a) facile, b) not relevant in industries like streaming, and c) not nearly enough to account for the rate of increases we’ve been seeing in consumables. The real reason, of course, is greed: Those selling think they can make more money by raising prices and enough consumers will continue to fork over the money to offset those who don’t.
Here’s where we get to the issue: This economic model ignores how people actually work.
In our newspaper example, raising rates did two things: First, it made people reconsider their model of what a newspaper is. For a long time, getting the newspaper was just what you did: it’s how you stayed informed and, as a teacher of mine once put it, “It’s what cultured people do.”
But by significantly raising the price, you force people to think of the thing they’re purchasing’s overall utility to their lives. What was once an automatic, “Yes, of course we pay for the paper,” now gets framed, internally, as “Does the paper provide $x amount of value to me?”
The second thing that raising prices does is increase awareness of the competition. In newspapers' case, this was pretty broadly known, but there was a significant percentage of people even in the early 2010s for whom getting the news via a single source delivered to their house every morning was more convenient than seeking out online or TV news sources.
But once that price goes up? Suddenly the hassle of trying to sift through information on the internet doesn’t seem so daunting. You’re more willing to experiment, because you’re saving so much money. And now the newspaper has to stand on its own as a value proposition, which isn’t a good strategy for a medium that is objectively and definitively slower, more expensive and less adaptable than its direct competition.
And we’re seeing the same thing happen now in real time, in a variety of industries. Subway jacked up its prices 39% from 2014 to 2024; a week ago, it had to hold an emergency corporate meeting because sales are so low. McDonald’s announced its first quarter-to-quarter sales drop since 2020. These and other companies assumed they could jack up the price and enough people would cover at the new high to offset those who bailed. And, worst-case scenario, if it’s too high, they can always drop the prices back down.
But that’s not how people work. When people feel like they’re being screwed, they get bitter and hold a grudge. When people are forced to confront and try new alternatives, sometimes it turns out they liked the new option better than the old one, anyway. And any brand loyalty they may have once held is completely obliterated, so you’re not only starting from scratch, you’re actually digging yourself out of a hole.
Such is life when you’re focused solely, maniacally on the short-term. You might find yourself with no long-term path back to success.
I feel like society doesn’t give the average person enough opportunities to formally and vehemently object to things. The fun is always reserved for lawyers and people whose dumb friends make bad decisions about marriage.
I feel like the biggest takeaway from all the election drama is that we as a society desperately need a shorter presidential campaign season. Like, 4 months, max: 2 for primaries, 2 for general.
Note: This content, by Anne Gibson, was originally published at the Pastry Box Project, under a Creative Commons (CC BY-NC-ND 4.0) license. I am reposting it here so that it might remain accessible to the wider web at large.
A is blind, and has been since birth. He’s always used a screen reader, and always used a computer. He’s a programmer, and he’s better prepared to use the web than most of the others on this list.
B fell down a hill while running to close his car windows in the rain, and fractured multiple fingers. He’s trying to surf the web with his left hand and the keyboard.
C has a blood cancer. She’s been on chemo for a few months and, despite being an MD, is finding it harder and harder to remember things, read, or have a conversation. It’s called chemo brain. She’s frustrated because she’s becoming more and more reliant on her smart phone for taking notes and keeping track of things at the same time that it’s getting harder and harder for her to use.
D is color blind. Most websites think of him, but most people making PowerPoint presentations or charts and graphs at work do not.
E has Cystic Fibrosis, which causes him to spend two to three hours a day wrapped in respiratory therapy equipment that vibrates his chest and makes him cough. As an extension, it makes his arms and legs shake, so he sometimes prefers to use the keyboard or wait to do tasks that require a steady touch with a mouse. He also prefers his tablet over his laptop because he can take it anywhere more conveniently, and it’s easier to clean germs off of.
F has been a programmer since junior high. She just had surgery for gamer’s thumb in her non-dominant hand, and will have it in her dominant hand in a few weeks. She’s not sure yet how it will affect her typing or using a touchpad on her laptop.
G was diagnosed with dyslexia at an early age. Because of his early and ongoing treatment, most people don’t know how much work it takes for him to read. He prefers books to the Internet, because books tend to have better text and spacing for reading.
H is a fluent English speaker but hasn’t been in America long. She’s frequently tripped up by American cultural idioms and phrases. She needs websites to be simple and readable, even when the concept is complex.
I has epilepsy, which is sometimes triggered by stark contrasts in colors, or bright colors (not just flashing lights). I has to be careful when visiting brightly-colored pages or pages aimed for younger people.
J doesn’t know that he’s developed an astigmatism in his right eye. He does know that by the end of the day he has a lot of trouble reading the screen, so he zooms in the web browser to 150% after 7pm.
K served in the coast guard in the 60s on a lightship in the North Atlantic. Like many lightship sailors, he lost much of his hearing in one ear. He turns his head toward the sound on his computer, but that tends to make seeing the screen at the same time harder.
L has lazy-eye. Her brain ignores a lot of the signal she gets from the bad eye. She can see just fine, except for visual effects that require depth perception such as 3-D movies.
M can’t consistently tell her left from her right. Neither can 15% of adults, according to some reports. Directions on the web that tell her to go to the top left corner of the screen don’t harm her, they just momentarily make her feel stupid.
N has poor hearing in both ears, and hearing aids. Functionally, she’s deaf. When she’s home by herself she sometimes turns the sound all the way up on her computer speakers so she can hear videos and audio recordings on the web, but most of the time she just skips them.
O has age-related macular degeneration. It’s a lot like having the center of everything she looks at removed. She can see, but her ability to function is impacted. She uses magnifiers and screen readers to try to compensate.
P has Multiple Sclerosis, which affects both her vision and her ability to control a mouse. She often gets tingling in her hands that makes using a standard computer mouse for a long period of time painful and difficult.
R was struck by a car crossing a busy street. It’s been six months since the accident, and his doctors think his current headaches, cognitive issues, and sensitivity to sound are post-concussion syndrome, or possibly something worse. He needs simplicity in design to understand what he’s reading.
S has Raynaud’s Disease, where in times of high stress, repetitive motion, or cold temperatures her hands and feet go extremely cold, numb, and sometimes turn blue. She tries to stay warm at her office desk but even in August has been known to drink tea to keep warm, or wear gloves.
T has a learning disability that causes problems with her reading comprehension. She does better when sentences are short, terms are simple, or she can listen to an article or email instead of reading it.
U was born premature 38 years ago - so premature that her vision was permanently affected. She has low vision in one eye and none in the other. She tends to hold small screens and books close to her face, and lean in to her computer screen.
V is sleep-deprived. She gets about five hours of bad sleep a night, has high blood pressure, and her doctor wants to test her for sleep apnea. She doesn’t want to go to the test because they might “put her on a machine” so instead she muddles through her workday thinking poorly and having trouble concentrating on her work.
W had a stroke in his early forties. Now he’s re-learning everything from using his primary arm to reading again.
X just had her cancerous thyroid removed. She’s about to be put on radioactive iodine, so right now she’s on a strict diet, has extremely low energy, and a lot of trouble concentrating. She likes things broken up into very short steps so she can’t lose her place.
Y was in a car accident that left her with vertigo so severe that for a few weeks she couldn’t get out of bed. The symptoms have lessened significantly now, but that new parallax scrolling craze makes her nauseous to the point that she shuts scripting off on her computer.
Z doesn’t have what you would consider a disability. He has twins under the age of one. He’s a stay-at-home dad who has a grabby child in one arm and if he’s lucky one or two fingers free on the other hand to navigate his iPad or turn Siri on.
=====
This alphabet soup of accessibility is not a collection of personas. These are friends and family I love. Sometimes I’m describing a group. (One can only describe chemo brain so many times.) Some people are more than one letter. (Yay genetic lottery.) Some represent stages people were in 10 years ago and some stages we know they will hit - we just don’t know when.
Robin Christopherson (@usa2day) points out that many of us are only temporarily able-bodied. I’ve seen this to be true. At any given moment, we could be juggling multiple tasks that take an eye or an ear or a finger away. We could be exhausted or sick or stressed. Our need for an accessible web might last a minute, an hour, a day, or the rest of our lives. We never know.
We never know who. We never know when.
We just know that when it’s our turn to be one of the twenty-six, we will want the web to work. So today, we need to make simple, readable, effective content. Today, we make sure all our auditory content has a transcript, or makes sense without one. Today, we need to make our shopping carts and logins and checkouts friendly to everyone. Today, we need to design with one thought to the color blind, one thought to the photosensitive epileptic, and one thought to those who will magnify our screens. Today we need to write semantic HTML and make pages that can be navigated by voice, touch, mouse, keyboard, and stylus.
Forgive the lack of posts recently; a back injury has mostly confined me to bed, and I get a little sick of staring at computer screens.
But while I’ve been out of it I caught up on Aaron Sorkin’s The Newsroom, which I had never seen. As a fan of The West Wing and yes, even Studio 60, I thought, as a former journalismo myself, this would be right up my alley.
And it definitely inspired me … to get back into writing code. It was so bad. I was surprised at how bad it was. It made me question my own taste and wonder whether I’d misjudged Sorkin’s talent.
Don’t get me wrong, he has some good scripts, and some of his meaty monologues and dialogues in various things he’s written are an absolute delight.
But he’s also written the same show at least three times now? Including similar (in some cases, identical) plot points, themes, specific jokes, even a reference to using too much back medicine as an excuse for why a white man said something dumb.
In case you couldn’t tell from my recipe intro up top there, this is a post about how I reworked Newslurp, a little app I coded four years ago (right before the Big Newsletter Boom thanks to Covid!). I switched RSS services at one point and was using a “subscribe to the newsletter from the service’s email” feature, but the lack of polish in the app (and severe degradation of basic feed-reading) means I’m back on the market.
And rather than tying all my content to another proprietary app, I decided to revive Newslurp so I could keep better control of everything. The app had a significant overhaul, with most of the email heavy lifting now being done in Google Apps Script (thus removing the need for Google API integration and the PECL mailparse extension, which is not readily available on shared hosts).
I also switched from MySQL to SQLite (because this is not really an application that needs a whole MySQL DB), and updated the code/dependencies to run on PHP 8.2.
My biggest takeaway from the whole thing is that while I really love types, PHP does not make it easy to use them properly with collections or array-like objects. Yikes.
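To give a flavor of what I mean (a contrived sketch, not actual Newslurp code; the Newsletter and NewsletterCollection names are made up for illustration): PHP has no generics, so the closest you get to a typed list is a wrapper class that enforces the element type at runtime, and the moment you hand the underlying array back out, the type guarantee is gone again.

```php
<?php
declare(strict_types=1);

// Hypothetical value object standing in for a stored newsletter issue.
final class Newsletter
{
    public function __construct(
        public readonly string $subject,
        public readonly string $body,
    ) {}
}

// The usual workaround for "array<Newsletter>": a wrapper that checks types
// at its boundary, since native arrays can't declare an element type.
final class NewsletterCollection
{
    /** @var Newsletter[] */
    private array $items = [];

    public function add(Newsletter $newsletter): void
    {
        $this->items[] = $newsletter; // the only place the type is enforced
    }

    /** @return Newsletter[] */
    public function all(): array
    {
        return $this->items; // callers get a plain, untyped array back
    }
}
```

Static analyzers can lean on the docblocks, but the language itself won’t stop anyone from shoving the wrong thing into a plain array.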
As always, I hope this is in some way helpful to others, but mostly it’s helpful to me! Enjoy.
Though I am no great fan of AI or its massively over-hyped potential, I also do not think it’s useless. As Molly White put it:
When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can’t do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial.
I wholeheartedly agree with those claims, and don’t want to get into the specifics of them too much. Instead, I wanted to think out loud/write about why there’s such a wide range of expectations and opinions on the current and future states of AI.
To get the easy one out of the way: Many of the most effusive AI hype people are in it for the money. They’re raising venture capital by saying AI, they’re trying to get brought in as consultants on AI, or they’re trying to sell their AI product to businesses and consumers. I don’t think that’s a particularly new phenomenon when it comes to new technology, though perhaps there is some novelty in how many different ways people are attempting to get their slice of the cake (companies cooking up AI models, apps trying to sell AI generation to consumers, hardware and cloud providers selling the compute necessary to do all of the above, etc.).
But once we take the pure profit motive out of the way, there are, I think, two key areas of difference between people who believe in AI wholeheartedly and those who range from neutral to critical.
The first is software development experience. Those who understand what it actually means when people say “AI is thinking” tend to have a more pessimistic view of the pinnacle of current AI generation strategies. In a nutshell, all of the current generative models try to ingest as much content as possible of whatever kind of thing they’re going to be asked to output. Then they are given a “prompt,” and they try (in simplistic terms) to piece together an image/string of words/video that looks most likely based on what came before.
This is why these models “hallucinate” - they don’t “know” anything in the way you know that Washington, DC is the capital of the United States. A model just knows that when a sentence starts “The capital of the United States is” it usually ends with the words “Washington, DC.”
And that can be useful in some instances! This is why AI does very well on low-level coding tasks - a lot of the basics of programming are pretty repetitive and pattern-based, so an expert pattern-matcher can do fairly well at guessing the most likely outcome. But it’s also why AI developer assistants produce stupid mistakes: the model doesn’t “understand” the syntax or the language or even the problem statement as a fundamental unit of knowledge. It simply reads a string of text and tries to figure out what would most likely come next.
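If you want to see the “expert pattern-matcher” idea in miniature, here’s a deliberately silly sketch - a bigram counter, nothing like how real models actually work, but the core move of “pick the most likely continuation” is the same:

```php
<?php
declare(strict_types=1);

// Count which word follows which in a training corpus.
function train(string $corpus): array
{
    $words = preg_split('/\s+/', strtolower(trim($corpus)));
    $counts = [];
    for ($i = 0; $i < count($words) - 1; $i++) {
        $next = $words[$i + 1];
        $counts[$words[$i]][$next] = ($counts[$words[$i]][$next] ?? 0) + 1;
    }
    return $counts;
}

// "Predict" by emitting the most frequent follower of a word.
function nextWord(array $counts, string $word): ?string
{
    if (!isset($counts[$word])) {
        return null; // never seen this word: the toy model has nothing to say
    }
    arsort($counts[$word]); // most frequent follower first
    return array_key_first($counts[$word]);
}

$counts = train('the capital of the united states is washington dc');
echo nextWord($counts, 'is'); // "washington" - not knowledge, just frequency
```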
The other thing you learn from experience is edge cases, and specifically what doesn’t work. This type of knowledge tends to accumulate only through having worked on a product before, and understanding how different pieces come together (or don’t). AI lacks this awareness of context, focusing only on what immediately surrounds the section it’s working on.
The other primary differentiator applies to the layperson, who is best understood here as a consumer, and it can be condensed to a single word: taste.
… all of us who do creative work ⌠we get into it because we have good taste. But itâs like thereâs a gap, that for the first couple years that youâre making stuff, what youâre making isnât so good, OK? Itâs not that great. Itâs really not that great. Itâs trying to be good, it has ambition to be good, but itâs not quite that good. But your taste â the thing that got you into the game â your taste is still killer, and your taste is good enough that you can tell that what youâre making is kind of a disappointment to you …
I think this is true, and I think it’s the biggest differentiator between people who think what AI is capable of right now is perfectly fine and those who think it’ll all wind up being a waste of time. People who can’t, or are unwilling to, create text/images/videos on their own think that AI is a great shortcut. This is either because the quality of what the AI can produce is better than what they can do unassisted, or because they don’t have the taste to see the difference in the first place.
I don’t know that there’s a way to bridge that gap, any more than there is a way to explain things to people who think that criticism of any artform is “unfair” or that “well, could you do any better?” is a valid counterpoint to cultural criticism. There are simply those people whose taste is better than what can be created through an amalgamation of the data used to train a model, and those who think that a simulacrum of art is indistinguishable from (or better than) the real thing.
Software requirements are rather straightforward - if we look at the requirements document, we see simple, declarative statements like “Users can log out,” or “Users can browse and create topics.” And that’s when we’re lucky enough to get an actual requirements document.
This is not legal advice
None of the following is intended to be legal advice. I am not a lawyer, have not even read all that many John Grisham novels, and am providing this as background for you to use. If you have actual questions, please take them to an actual lawyer. (Or you can try calling John Grisham, but I doubt he’d pick up.)
But there are other requirements in software engineering that aren’t as cut-and-dried. Non-functional requirements related to things like maintainability, security, scalability and, most importantly for our purposes, legality.
For the sake of convenience, we’re going to use “regulations” and other derivations of the word to mean “all those things that carry the weight of law,” be they laws, rules, directives, court orders or what have you.
Hey, why should I care? Isn’t this why we have lawyers?
Hopefully your organization has excellent legal representation. Also hopefully, those lawyers are not spending their days watching you code. That’s not going to be fun for them or you. You should absolutely use lawyers as a resource when you have questions or aren’t sure if something would be covered under a specific law. But you have to know when to ask those questions, and possess enough knowledge to recognize when your application could be running afoul of some rule or another.
It’s also worthwhile to your career to know these things! Lots of developers don’t, and your ability to point them out and know about them will make you seem more knowledgeable (because you are!). It will also make you seem more competent and capable than another developer who doesn’t know them - again, because you are! This stuff is a skillset just like knowing Django.
While lawyers may be domain experts, they aren’t always (especially at smaller organizations), and there are lots of regulations that specifically cover technology/internet-capable software that domain experts likely would not (and should not) be expected to be on top of. Further, if you are armed with foreknowledge, you don’t have to wait for legal review after the work has been completed.
Also, you know, users are people, too. Most regulations wind up being bottom-of-the-barrel expectations that user data be safeguarded and that organizations be kept from tricking users into doing things they wouldn’t have otherwise done. In the same way I would hope my data and self-determination are respected, I also want to do the same for my users.
Regulatory environments
The difference in the regulatory culture between the US and the European Union is vast. I truly cannot stress how different they are, and that’s an important thing to know about because it can be easy to become fluent in one and assume the other is largely the same. It’s not. Trust me.
United States
The US tends, for the most part, to be a reactionary regulator. Something bad happens, laws or rules (eventually) get written to stop that thing from happening again.
Also, the interpretations of those rules tend to fluctuate more than in the EU, depending on things seemingly as random as which political party is in power (and controlling the executive branch, specifically) or what jurisdiction a lawsuit is filed in. We will not go in-depth into those topics, for they are thorny and leave scars, but it’s important to note. The US also tends to give wide latitude to the defense of, “but it’s our business model!” The government will not give a full pass on everything, but they tend to phrase things in terms of “making fixes” rather than “don’t do that.”
Because US regulations tend to be written in response to a specific incident or set of incidents, they tend for the most part to be either very narrowly tailored or very broad (e.g., “TikTok is bad, let’s give the government the ability to jail you for 20 years for using a VPN!”), leaving little guidance to those of us in the middle. This leaves lots of room for unintended consequences or simply failing to achieve the stated goals. In 2003, Congress passed the CAN-SPAM Act to “protect consumers and businesses from unwanted email.” As anyone who ever looks at their spam box can attest, CAN-SPAM’s acronym unfortunately seems to have meant “can” as in “grant permission,” not “can” as in “get rid of.”
European Union
In contrast, the EU tends to issue legislation prescriptively; that is, they identify a general area of concern, and then issue rules about both what you can and cannot do, typically founded in some fundamental right.
This is technically what the US does, in a more roundabout way, but the difference is that in the EU the right is the foundational aspect, meaning it’s much more difficult to slip through a loophole.
From a very general perspective, this leads to EU regulations being more restrictive in what you can and can’t do, and the EU is far more willing to punish punitively those companies who run afoul of the law.
Global regulations
There are few regulations that apply globally, and usually they come about backwards - in that a standard is created, and then adopted throughout the world.
Accessibility
In both the US and the EU, the general standard for digital accessibility is WCAG 2.1, level AA. If your website or app does not meet (most of) that standard, and you are sued, you will be found to be out of compliance.
In the US, the reason you need to be compliant comes from a variety of places. The federal government (and state governments) need to be compliant because of the Rehabilitation Act of 1973, section 508. Entities that receive federal money (including SNAP and NSF grants) need to be compliant because of the RA of 1973, section 504. All other publicly accessible organizations (companies, etc.) need to have their websites compliant because of the Americans with Disabilities Act and various updates. And all of the above has only arisen through dozens of court cases as they wound their way through the system, often reversing each other or finding different outcomes with essentially the same facts. And even then, penalties for violating the act are quite rare, with the typical cost being a) the cost of litigation, and b) the cost of remediation and compliance (neither of which are small, but they’re also not punitive, either).
In the EU, they issued the Web Accessibility Directive that said access to digital information is a right that all persons, including those with disabilities, should have, so everything has to be accessible.
See the difference?
WCAG provides that content should be:
Perceivable - Your content should be able to be consumed in more than one of the senses. The most common example of this is audio descriptions on videos (because those who can’t see the video still should be able to glean the relevant information from it).
Operable - Your content should be usable in more than one modality. This most often takes the form of keyboard navigability, as those with issues of fine motor control cannot always handle a mouse dextrously.
Understandable - Your content should be comprehensible and predictable. I usually give a design example here, which is that the accessibility standard actually states that your links need to be perceivable, visually, as links. Also, the “visited” state is not just a relic of CSS, it’s actually an accessibility issue for people with neurological processing differences who want to be able to tell at a glance what links they’ve already been to.
Robust - Very broadly, this tenet states you should maximize your compliance with accessibility and other web standards, so that current and future technologies can take full advantage of them without requiring modification to existing content.
Anyway, for accessibility, there’s a long list of standards you should be meeting. The (subjectively) more important ones most frequently not followed are:
Provide text alternatives for all non-text content: This means alt text for images, audio descriptions for video and explainer text for data/tables/etc. Please also pay attention to the quality - the purpose of the text is to provide a replacement for when the non-text content can’t be viewed, so “picture of a hat” is probably not an actual alternative.
Keyboard control/navigation: Your site should be navigable with a keyboard, and all interactions (think slideshows, videos) should be controllable by a keyboard.
Color contrast: Large text (including most headers) should have a contrast ratio of at least 3:1 between the foreground and background; normal-size text needs at least 4.5:1. (There’s a sketch of how the ratio is computed after this list.)
Don’t rely on color for differentiation: You cannot rely solely on color to differentiate between objects or types of objects. (Think section colors for a newspaper website: You can’t just have all your sports links be red, it has to be indicated some other way.)
Resizability: Text should be able to be resized up to 200% without loss of content or functionality.
Images of text: Don’t use ‘em.
Give the user control: You can autoplay videos or audio if you must, but you also have to give the user the ability to stop or pause it.
There are many more, but these are the low-hanging fruit that lots of applications still can’t manage to pick off.
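If you’d rather check contrast programmatically than eyeball it, the ratio comes from WCAG’s relative-luminance formula. A minimal sketch (plain six-digit hex colors only, no alpha; the function names are mine):

```php
<?php
declare(strict_types=1);

// Relative luminance of a hex color like "#767676", per the WCAG definition.
function relativeLuminance(string $hex): float
{
    $channels = [];
    foreach (str_split(ltrim($hex, '#'), 2) as $pair) {
        $c = hexdec($pair) / 255;
        // sRGB linearization
        $channels[] = $c <= 0.03928 ? $c / 12.92 : (($c + 0.055) / 1.055) ** 2.4;
    }
    [$r, $g, $b] = $channels;
    return 0.2126 * $r + 0.7152 * $g + 0.0722 * $b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
// AA wants at least 4.5:1 for normal text and 3:1 for large text.
function contrastRatio(string $foreground, string $background): float
{
    $l1 = relativeLuminance($foreground);
    $l2 = relativeLuminance($background);
    return (max($l1, $l2) + 0.05) / (min($l1, $l2) + 0.05);
}

echo round(contrastRatio('#767676', '#ffffff'), 2); // ~4.54, just squeaks past AA
```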
PCI DSS
The Payment Card Industry Data Security Standard is a set of standards that governs how you should store credit card data, regulated by the credit card companies themselves. Though some individual US states require adherence to the standards (and fine violators appropriately), neither US federal law nor EU law requires you to follow them (at least, not these standards specifically). However, the credit card companies themselves can step in and issue fines or, more critically, cut off access to their payment networks if they find the breaches egregious enough.
In most cases, organizations offload their payment processing to a third party (e.g., Stripe, Paypal), who is responsible for maintaining compliance with the specification. However, you as the merchant or vendor need to make sure you’re storing the data from those transactions in the manner provided by the payment processor; it’s not uncommon to find places that are storing too much data on their own infrastructure that technically falls under the scope of PCI DSS.
Some of the standards are pretty basic - don’t use default vendor passwords on hardware and software, encrypt your data transmissions. Some are more involved, like restricting physical access to cardholder data, or monitoring and logging access to network resources and data.
EU regulations
GDPR
The EU’s General Data Protection Regulation caused a big stir when it was first released, and for good reason. It completely changed the way that companies could process and store user data, and severely restricted what sort of shenanigans companies can get up to.
The GDPR states that individuals have the right to not have their information shared; that individuals should not have to hand over their information in order to access goods or services; and that individuals have further rights to their information even once it’s been handed over to another organization.
For those of us on the side of building things, it means a few things are now requirements that used to be more “nice-to-haves.”
You must get explicit consent to collect data. If you’re collecting data on people, you have to explicitly ask for it. You have to specify exactly what information you’re collecting, the reason you’re collecting it, how long you plan on storing it and what you plan to do with it (this is the reason for the proliferation of all those cookie banners a few years ago). Furthermore, you must give your users the right to say no. You can’t just pop up a full-screen non-dismissable modal that doesn’t allow them to continue without accepting it.
You can only collect data for legitimate purposes. Just because someone’s willing to give you data doesn’t mean you’re allowed to take it. One of the biggest headaches I had around GDPR was when a client wanted to gate some white papers behind an email signup. I patiently explained multiple times that you can’t require an email address for a good or service unless the email address is required to provide said good or service. No matter how many times the client insisted that he had seen someone else doing the same thing, I stood firm and refused to build the illegal interaction.
Users have the right to ask for the data you have stored, and to have it deleted. Users can ask to see what data you have stored on them, and you’re required to provide it (including, again, why you have that data stored). And, unless it’s being used for legitimate processing purposes, you have to delete that data if the user requests it (the “right to be forgotten”).
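In practice, the access and erasure rights tend to boil down to a pair of well-tested code paths. A rough sketch of the shape (the table and column names are hypothetical, and a real implementation also needs identity verification, an audit trail, and carve-outs for data you’re legally required to retain):

```php
<?php
declare(strict_types=1);

// Hypothetical sketch of GDPR-style access and erasure requests.
final class UserDataRequests
{
    public function __construct(private PDO $db) {}

    /** Right of access: everything held on the user, plus why it is held. */
    public function export(int $userId): array
    {
        $stmt = $this->db->prepare(
            'SELECT field, value, purpose, retained_until FROM user_data WHERE user_id = ?'
        );
        $stmt->execute([$userId]);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }

    /** Right to erasure: delete everything not under a legal retention duty. */
    public function erase(int $userId): int
    {
        $stmt = $this->db->prepare(
            'DELETE FROM user_data WHERE user_id = ? AND legal_hold = 0'
        );
        $stmt->execute([$userId]);
        return $stmt->rowCount();
    }
}
```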
And all of this applies to any organization or company that provides a good or service to any person in the EU. Not just paid, either - it explicitly says that you do not have to charge money to be covered under the GDPR. So if your org has an app in the App Store that can be downloaded in Ireland, Italy, France or any other EU country, it and likely a lot more of your company’s services will fall under GDPR.
As for enforcement, organizations can be fined up to €20 million, or up to 4% of the annual worldwide turnover of the preceding financial year, whichever is greater. Amazon Europe got docked €746 million for what was alleged “[manipulation of] customers for commercial means by choosing what advertising and information they receive[d]” based on the processing of personal data. Meta was fined a quarter of a billion dollars a few different times.
But it’s not just the big companies. A translation firm got hit with fines of €20K for “excessive video surveillance of employees” (a fine that’s practically unthinkable in the US absent cameras in a private area such as the bathroom), and a retailer in Belgium had to pay €10K for forcing users to submit an ID card to create a loyalty account (since that information was not necessary to creating a loyalty account).
Digital Markets Act
The next wave of regulation to hit the tech world was the Digital Markets Act, which is aimed specifically at large corporations that serve a “gatekeeper” function in digital markets in at least three EU countries. Although it is not broadly applicable, it will change the way that several major platforms work with their data.
The act’s goal is to break up the oversized share that some platforms have in digital sectors like search, e-commerce, travel, media streaming, and more. When a platform controls sufficient traffic in a sector, and facilitates sales between businesses and users, it must comply with new regulations about how data is provisioned and protected.
Specifically, those companies must:
Allow third parties to interoperate with their services
Allow businesses to access the data generated on the platform
Provide advertising partners with the tools and data necessary to independently verify claims
Allow business users to promote and conduct business outside of the platform
Additionally, the gatekeepers cannot:
Promote internal services and products over third parties
Prevent consumers from linking up with businesses off their platforms
Prevent users from uninstalling preinstalled software
Track end users for the purpose of targeted advertising without users’ consent
If it seems like these are aimed at the Apple App Store and Google Play Store, well, congrats, you cracked the code. The DMA aims to help businesses have a fairer environment in which to operate (and not be completely beholden to the gatekeepers), and allow for smaller companies to innovate without being hampered or outright squashed by established interests.
US regulations
The US regulatory environment is a patchwork of laws and regulations written in response to various incidents, and with little forethought for the regulatory environment as a whole. It’s what allows you as a developer to say, “Well, that depends …” in response to almost any question, to buy yourself time to research the details.
HIPAA
Likely the most well-known US privacy regulation, HIPAA covers almost none of the things that most people commonly think it does. We’ll start with the name: Most think it’s HIPPA, for Health Information Privacy Protection Act. It actually stands for the Health Insurance Portability and Accountability Act, because most of the law has nothing to do with privacy.
It is very much worth noting that HIPAA only applies to health plans, health care clearinghouses, and those health care providers that transmit health information electronically in connection with certain administrative or financial transactions where health plan claims are submitted electronically. It also applies to contractors and subcontractors of the above.
That means most of the time when people publicly refuse to comment on someone’s health status because of HIPAA (like, in a sports context or something), it’s nonsense. They’re not required to disclose it, but it’s almost certainly not HIPAA that’s preventing them from doing so.
What is relevant to us as developers is the HIPAA Privacy Rule. The HIPAA privacy rule claims to “give patients more control over their health information, set boundaries on the use of their health records, establish appropriate safeguards for the privacy of their information.”
What it does in practice is require that you have to sign a HIPAA disclosure form for absolutely every medical interaction you have (and note, unlike GDPR, that they do not have to let you say “no”). Organizations are required to keep detailed compliance policies around how your information is stored and accessed. While the latter is undoubtedly a good thing, it does not rise to the level of reverence indicated by its stated goals.
What you as a developer need to know about HIPAA is that you need to have very specific policies (think SOC 2 [official link] [more useful link]) around data access, operate using the principle of least-privilege access (only allow those who need to see PHI to access it), and have specific security policies related to the physical facility where the data is stored.
HIPAA’s bottom line is that you must keep safe Protected Health Information (PHI), which covers both basic forms of personally identifiable information (PII) such as name, email, address, etc., as well as any health conditions those people might have. This seems like a no-brainer, but it can get tricky when you get to things like disease- or medicine-specific marketing (if you’re sending an email to someone’s personal email address on a non-HIPAA-compliant server about a prostate cancer drug, are you disclosing their illness? Ask your lawyer!).
There are also pretty stringent requirements related to breach notifications (largely true of a lot of the compliance audits as well). These are not things you want to sweep under the rug. It’s true that HIPAA does not see as many enforcement actions around the privacy aspects as some of the other, jazzier regulations. But health organizations also tend to err on the side of caution and use HIPAA-certified hosting and tech stacks, as any medical provider will be sure to complain about to you if you ask them how they enjoy their Electronic Medical Records system.
Section 230 of the Communications Decency Act
Also known as the legal underpinnings of the modern internet, Section 230 provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In practice, this means that platforms that publish user-generated content (UGC) will not be treated as the “publisher,” in the legal sense, of that content for the purposes of liability for libel, etc. This does not mean they are immune from copyright or other criminal liabilities but does provide a large measure of leeway in offering UGC to the masses.
It’s also important to note the title of the section, “Protection for private blocking and screening of offensive material.” That’s because Section 230 explicitly allows for moderation of private services without exposing the provider to any liability for failing to do so in some instances. Consider a social media site that bans Nazi content; if that site lets a few bad posts go through, it does not mean they are on the hook for those posts, at least legally speaking. Probably a good idea to fix the errors lest they be found guilty in the court of public opinion, though.
GLBA
The Gramm-Leach-Bliley Act is a sort of privacy protection policy for financial institutions. It doesn’t lay out anything particularly novel or onerous - financial institutions need to provide a written privacy policy (what data is collected, how it’s used, how to opt out), and it provides some guidelines companies need to meet about safeguarding sensitive customer information. The most interesting requirement, to me, is pretexting protection, which actually enshrines in law that companies need to have policies in place for how to prevent and mitigate social engineering attacks, both of the phishing variety as well as good old-fashioned impersonation.
COPPA
The Children’s Online Privacy Protection Rule (COPPA, and yes, it’s infuriating that the acronym doesn’t match the name) is one of the few regulations with teeth, largely because it is hyperfocused on children, an area of lawmaking where overreaction is somewhat common.
COPPA provides for a number of (now) common-sense rules governing digital interactions that companies can have with children under 13 years old. Among them:
Explicit parental consent before information can be collected.
Separate privacy policies drafted and posted for data about those under 13.
A reasonable means for parents to review their children’s data.
Established and maintained procedures for protecting that data, including around sharing that data.
Limits on retention of that data.
A prohibition on asking for more data than is necessary to provide the service in question.
Sound weirdly familiar, like GDPR? Sure does. Wondering why only children in the US are afforded such protections? Us too!
FERPA
The Family Educational Rights and Privacy Act is sort of like HIPAA, but for education. Basically, it states that the parents of a child have a right to the information collected about their child by the school, and to have a say in the release of said information (within reason; they can’t squash a subpoena or anything). When the child reaches 18, those rights transfer to the student. Most of FERPA comes down to the same policy generation around retention and access discussed in the section on HIPAA, though the disclosure bit is far more protective (again, because it’s dealing with children).
FTC Act
The Federal Trade Commission Act of 1914 is actually the law that created the Federal Trade Commission, and the source of its power. You can think of the FTC as a quasi-consumer protection agency, because it can (and, depending on the political party in the presidency, will) go after companies for what aren’t even really violations of law so much as they are deemed “unfair.” The FTC Act empowers the commission to prevent unfair competition, as well as protect consumers from unfair/deceptive ads (though in practice, this has been watered down considerably by the courts).
Nevertheless, of late the FTC has been on a roll, specifically targeting digital practices. An excellent recent example was the settlement by Epic Games, makers of Fortnite. The FTC sued over a number of allegations, including violations of COPPA, but it also explicitly called out the company for using dark patterns to trick players into making purchases. The company’s practice of saving any credit cards used (and then making that card available to the kids playing), confusing purchasing prompts and misleading offers were specifically mentioned in the complaint.
CAN-SPAM
Quite possibly the most useless technology law on the books, CAN-SPAM (Controlling the Assault of Non-Solicited Pornography And Marketing Act) clearly put more time into the acronym than the legislation. The important takeaways are that emails need:
Accurate subjects
To disclose themselves as an ad
Unsubscribe links
A physical address for the company
And as your spam box will tell you, it solved the problem forever. This does not, however, mean you can ignore its strictures! As a consultant at a company that presumably wishes to stay on the right side of the law, you should still follow its instructions.
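If you’re assembling marketing email yourself rather than through a provider, those requirements mostly translate into headers and footer boilerplate. A sketch (the addresses and URL are placeholders, not real endpoints):

```php
<?php
declare(strict_types=1);

// CAN-SPAM-minded marketing email: honest subject, an ad disclosure,
// a working unsubscribe link, and the sender's physical mailing address.
$unsubscribeUrl = 'https://example.com/unsubscribe?token=abc123';

$headers = [
    'From: Example Corp <news@example.com>',
    'Reply-To: news@example.com',
    'List-Unsubscribe: <' . $unsubscribeUrl . '>', // lets mail clients offer one-click unsubscribe
    'Content-Type: text/plain; charset=UTF-8',
];

$subject = 'March deals on widgets'; // accurate, not bait

$body = <<<TEXT
This is a promotional message from Example Corp.

Widgets are 20% off through the end of March.

Unsubscribe: {$unsubscribeUrl}
Example Corp, 123 Main St., Springfield, ST 00000
TEXT;

mail('customer@example.net', $subject, $body, implode("\r\n", $headers));
```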
CCPA and Its Ilk
The California Consumer Privacy Act covers, as its name suggests, California residents in their dealings with technology companies. Loosely based on the GDPR, CCPA requires that businesses disclose what information they have about you and what they do with it. It covers items such as name, social security number, email address, records of products purchased, internet browsing history, geolocation data, fingerprints, and inferences from other personal information that could create a profile about your preferences and characteristics.
It is not as wide-reaching or thorough as GDPR, but it’s better than the (nonexistent) national privacy law.
The CCPA applies to companies with gross revenues totaling more than $25 million, businesses with information about more than 50K California residents, or businesses that derive at least 50% of their annual revenue from selling California residents’ data. Similar measures have already been made law in Connecticut, Virginia, Colorado, and Utah, and other states are considering relevant bills.
Other state regulations
The joy of the United States’ federalist system is that state laws can differ from (and sometimes be more stringent than!) federal law, as we see with CCPA. It would behoove you to do a little digging into state regulations when you’re working in specific areas - e.g., background checks, where the laws differ from state to state - as even though you’re not based in a given state, you may be subject to its jurisdiction.
There are two different approaches companies can take to dealing with state regulations: Either treat everyone under the strictest regulatory approach (e.g., treat every user like they’re from California) or make specific carve-outs based on the state of residence claimed by the user.
It is not uncommon, for example, to have three or four different disclosures or agreements for background checks ready to show a user based on what state they reside in. The specific approach you choose will vary greatly depending on the type of business, the information being collected, and the relevant state laws.
How to implement
Data compliance is critical, and the punitive aspects of GDPR’s enforcement mean your team must have a solid strategy for compliance.
The most important aspect of dealing with any regulatory issue is first knowing what’s required for your business. Yes, you’re collecting emails, but to what end? If that data is necessary for your business to function, then you have your base-level requirements.
Matching those up against the relevant regulations will provide you with a starting point from which you can begin to develop the processes, procedures and applications that will allow your business to thrive. Don’t rely on “that’s how we’ve always done it” or “we’ve seen other people do x” as a business strategy.
The regulatory environment is constantly shifting, and it’s important to both keep abreast of changes and always know what data and services are integral to your business’s success. Keeping up with the prevalent standards will aid you not only in not getting sued, but also in assuring your customers and partners that you’re trustworthy and reliable.
How to keep up
It all seems a little daunting, no?
But you eat the proverbial regulatory elephant the same way you do any other large food item: one bite at a time. In the same way you didn’t become an overnight expert in securing your web applications against cross-site scripting attacks or in properly managing your memory overhead, becoming a developer who’s well-versed in regulatory environments is a gradual process.
Now that you know about some of the rules that may apply to you, you know what to keep an eye out for. You know potential areas to research when new projects are pitched or started, and you know where to ask questions. You know to both talk to and listen to your company’s legal team when they start droning on about legalistic terms.
People always seem confused by the title of this: “What does scrum have to do with measuring productivity?” they ask. And I smile contentedly, because that’s the whole point.
Scrum is supposed to be a system for managing product work, iterating and delivering value to the customer. What usually winds up happening is scrum gets used for the management of software development work as a whole, from decisions about promotion to hiring and firing to everything else. That’s not what scrum is designed to do, and it shows.
Now, I love to talk about process improvement, completely agnostic of whatever process framework you’re using. I would much rather have a discussion about the work you’re doing and what blockers you’re hitting rather than discussing abstract concepts.
However, if you keep running into the same issues and blockers over and over again, it’s usually worth examining your workflows to find out if you’re actually applying the theory behind your framework to the actual work you’re doing. The concept of Agile specifically is not about the processes involved, but you need to know and understand the rules before you should feel comfortable breaking them.
Processes
I want to start with a quick overview of a few key terms to make sure everyone’s on the same page.
Waterfall
In waterfall development, every item of work is scheduled out in advance. This is fantastic for management, because they can look at the schedule to see exactly what should be worked on, and have a concrete date by which everything will be done.
This is horrible for everyone, including management, because the schedule is predicated upon developers being unerring prophets who are able to forecast not only the exact work that needs to be done to develop a release, but also the exact amount of time said work will take.
The ultimate delimiter of the work to be done is the schedule - usually there’s a specific release date (hopefully but not always far out enough to even theoretically get all the work done); whatever can get done by that date tends to be what’s released.
Waterfall also suffers greatly because it’s completely inflexible. Requirements are gathered months ahead of time; any changes require completely reworking the schedule, so changes are frowned upon. Thus, when the product is released, it’s usually missing features that would have been extremely beneficial to have.
Agile
Agile can be viewed as a direct response to waterfall-style development; rather than a rigid schedule, the agile approach embraces iteration and quick releases. The three primary “laws” of agile are:
Law of the customer - The customer is the number one priority. Rather than focusing on hitting arbitrary milestones or internal benchmarks, agile teams should be focused on delivering products to customers that bring them additional value. A single line of code changed can be more worthwhile than an entirely new product if that line brings extra value to the customer.
Law of small teams - Developers are grouped into small teams that are given autonomy in how they implement the features they’re working on. When work is assigned to a team, it’s not done so prescriptively. In the best agile teams, the assignment is, “Here’s the problem we have, go solve it.”
Law of the network - There are differing interpretations on how to implement this, but essentially I view the law of the network as “the whole organization has to buy in to what the agile teams are doing.” The entire organization doesn’t need to have the same structure as the agile teams, but neither can it be structured in a manner antithetical to the processes or outcomes.
The easiest counterexample: the entire dev department is using scrum, but the CTO still feels entitled (by virtue of their title) to step in and make changes or contribute code or modify stories on a whim. Just because the CTO is the manager doesn’t mean they have full control over every decision. Basically, the law of the network means “respecting the agile method, even if you’re not directly involved.”
It’s worth noting that agile is a philosophy, not a framework in and of itself. Both kanban and scrum are implementations of the agile philosophy.
Kanban
This is usually the most confusing, because both scrum and kanban can use kanban boards (the table of stories, usually denoted by physical or virtual “post-its,” that represent the team’s work). A kanban board splits up work into different “stages” (e.g., to-do, doing, done), and provides a visual way to track the progression of stories.
The primary difference between scrum and kanban as a product development methodology is that kanban does not have specific “sprints” of work - the delimiter of work is how many items are in a given status at a given time. For example, if a team limits “doing” to four cards and there are already four cards in there, no more can be added until one is moved along to the next stage (usually this means developers will pair or mob on a story to get it through).
Scrum
Scrum, by contrast, delimits its work by sprints. Sprints are the collection of work the team feels is necessary to complete to deliver value. They can be variable in their length (though in practice, they tend to be a specified time length, which causes its own issues).
Scrum requires each team to have at least two people - a product owner and a scrum master. Usually there are also developers, QA and devops people on the team as well, but at a minimum you need the PO and SM.
The product owner has the vision for what the product should be - they should be in constant contact with customers, potential customers and former customers to figure out how value can be added. The scrum master’s job is to be the sandpaper for the developers - not (as the name implies) their manager or boss, but the facilitator for ceremonies and a source of coaching/guidance on stories and blockers.
Other reasons processes fail
I will note that a lot of the reasons I will list below may also apply to other product management methodologies; however, I’m specifically limiting the scope to how they impact scrum teams.
Lack of product vision
I don’t want to lay the blame entirely on product owners for this issue - very often the problem is with how the role is designed and hired for. Product owners should be the final arbiters for product decisions. They should absolutely consult design, UX and customer service experts for their opinions, but the decision ultimately lies with them.
Unfortunately, the breadth of skills required to be a good product owner is not in abundant supply, and product owners are, bafflingly, often considered afterthoughts at many organizations.
More than specific skills, though, product owners need to have a vision for what the product could be, as well as the flexibility to adapt that vision when new information comes in. Usually, this requires domain knowledge (that can be acquired, but needs to be done so systematically and quickly upon hiring), steadfastness of conviction and the ability to analyze data properly to understand what customers want.
Far too often product owners essentially turn into feature prioritizers, regurgitating nearly everything customers say they want and assigning a ranking to it. This often comes at the expense of both the product’s conceptual integrity as well as relationships with developers, who are supposed to be given problems to solve, not features to develop. This is the classic feature factory trap.
Mistaking the rules for the reason
Far too often, people will adopt the ceremonies or trappings of scrum without actually accepting an agile mindset. This is where my favorite tagline, "that's just waterfall with sprints," comes from.
If you've ever started a project by first projecting and planning how long it's going to take you to deliver a given set of features, congratulations, you're using waterfall.
To use scrum, you need to adopt an iterative mindset toward how you view your product. If you're developing Facebook, you don't say, "we're going to build a system that allows you to have an activity feed that shows posts from your friends, groups and advertisers, and have an instant messaging product, and ..."
Instead, you'd say, "we're going to develop a platform that helps people connect to one another." Then you'd figure out the greatest value you can add in one sprint (e.g., users can create profiles and upload their picture). You know once you have profiles you'll probably want the ability to post on others' profiles, so that's in the backlog.
That's it. That's the planning you do. Because once those releases get into customers' hands, you'll then have better ideas for how to deliver the next increment of value.
Simply because an organization has "sprints" and a "backlog" and does "retros" doesn't mean it's using scrum; it means it's using the language of scrum.
Lack of discipline/iteration
Tacking on to the last point, not setting up your team for success in an agile environment can doom the product overall. Companies tend to like hiring more junior developers because they're cheaper, without realizing that a junior developer is not just a senior developer working at 80% speed. Junior developers need mentoring and code reviews, and those things take time. If the schedule is not set up to allow for that necessary training and those code quality checks to happen, the product will suffer overall.
Similarly, development teams are often kept at a startling remove from everyday users and their opinions/feedback. While I by no means advocate a direct, open firehose of feedback, some organizations don't ever let their devs see actual users using the product, which creates a horrible gap in the feedback loop from a UX and product design perspective.
Properly investing in the team and the processes is essential to any organization, but especially one that uses scrum.
Lack of organizational shift
The last ancillary reason I want to talk about in terms of scrum failure is aligning the organization with the teams that are using scrum (we're back to the law of the network here). Scrum does not just rely on the dev team buying in; it also requires the larger organization to at least respect the principles of scrum for the team.
The most common example of this I see is when the entire dev department is using scrum, but the CTO still feels entitled (by virtue of their title) to step in and make changes, contribute code or modify stories on a whim. Just because the CTO is the manager doesn't mean they have full control over every decision. Removing the team's autonomy messes with the fundamental principles of scrum, and usually indicates there will be other issues as well (and I guarantee that CTO will also be mad when the sprint winds up unbalanced or work doesn't get done, even though they're the direct cause).
No. 1 reason scrum fails: It's used for other purposes
By far, the biggest reason I see scrum failing to deliver is when the ceremonies or ideas or data generated by scrum get used for something other than delivering value to the end users.
It's completely understandable! Management broadly wants predictability, the ability to schedule a release months out so that marketing and sales can create content and be ready to go.
But that's not how scrum works. Organizations are used to being able to dictate schedules for large releases of software all at once (via waterfall), and making dev deliver on those schedules. If you're scheduling a featureset six months out, it's almost guaranteed you're not delivering in an agile manner.
Instead of marketing-driven development, why not flip the script and have development-driven marketing? There is absolutely no law of marketing that says you have to push a new feature the second it's generally available. If the marketing team keeps up with what's being planned a sprint in advance, that means they'd typically have at least a full month of lead time to prepare materials for release.
Rather than being schedulable, what dev teams should shoot for is reliability and dependability. If the dev team commits to solving an issue in a given sprint, it'd better be done within that sprint (within reason). If it's not, it's on the dev team to improve its process so the situation doesn't happen again.
But why does scrum get pulled off track? Most often, it's because data points in scrum get used to mean something else.
Estimates
The two hardest problems in computer science are estimates, naming things, and zero-based indexes. Estimates are notoriously difficult to get right, especially when developing new features. Estimates get inordinately more complex when we talk about story pointing.
Story points are a value assigned to a given story. They are supposed to be relative to other stories in the sprint - e.g., a 2 is bigger than a 1, or a medium is bigger than a small, whatever. Regardless of the scale you're using, it is supposed to be a measure of complexity for the story, for prioritization purposes only.
Unfortunately, what usually winds up happening is teams adopt some sort of translation scale (either direct or indirect), something like 1 = finish in an afternoon, 2 = finish in a day, 3 = multiple days, 5 = a week, etc. But then management wants to make sure everyone is pulling their fair share, so people are gently told that 10 is the expectation for the number of points they should complete in a two-week sprint, and now we are completely off the rails.
Story points are not time estimates. Full stop.
It's not a contract; you're not a traffic cop trying to make your quota. Story points are estimates of the complexity of a story for you to use in prioritization. That's it.
I actually dislike measuring sprint velocity sprint-to-sprint, because I don't think it's helpful in most cases. It actually distorts the meaning of a sprint. Remember, sprints are supposed to be variable in length; if your increment of value is small, have a small sprint. But because sprint review and retro have to happen every second Friday, sprints have to be two weeks. Because the sprint is two weeks, now we have two separate foci, and the scrum methodology drifts further and further away.
Campbell's law is one of my favorite axioms. Paraphrased, it states:
The more emphasis placed on a metric, the more those being measured will be incentivized to game it.
In the case above, if developers are told they should be getting 10 points per sprint, suddenly their focus is no longer on the customer. It's now on the number of story points they have completed. They may be disincentivized to pick up larger stories, fearing they might get bogged down. They're almost certainly going to overestimate the complexity of stories, because now underestimates mean they're going to be penalized in terms of hitting their targets.
This is where what I call the Concilio Corollary (itself a play on the uncertainty principle) comes into play:
You change the outcome of development by measuring it.
It's ultimately a question of incentives and focus. If you start needing to worry about metrics other than "delivering value to the user," then your focus drifts from same. This especially comes into play when organizations worry about individual velocity.
I don't believe in the practice of "putting stories down" or "pick up another story when slightly blocked." If a developer is blocked, it's on the scrum master and the rest of the team to help them get unblocked. But I absolutely understand the desire to do so if everybody's expected to maintain a certain momentum, and other people letting their tasks lie to help you is detrimental to their productivity stats. How could we expect teamwork to flourish in such an environment?
So how do we measure productivity?
Short answer: don't.
Long answer: Don't measure "productivity" as if it's a value that can be computed from a single number. Productivity on its own is useless.
I used to work at a college of medicine, and after a big website refresh they were all excited reporting how many pageviews the new site was getting. And it makes sense, because when we think of web analytics, we think page views and monthly visitors and time on site, all that good stuff.
Except … what's the value of pageviews to a college? They're not selling ads, where more views work out to more money. In fact, the entire point of the website was to get prospective students to apply. So rather than track "how many people looked at this site," what they should have been doing was looking at "how many come to this site and then hit the 'apply now' button," and comparing that to the previous incarnation.
First, you need to figure out what the metrics are being used for. There are any number of different reasons you might want to measure "productivity" on a development team. Some potential reasons include performance reviews, deciding who to lay off, justifying costs, figuring out where/whether to invest more, or fixing issues on the development team.
But each of those reasons has a completely different dataset you should be using to make that decision. If you're talking about performance reviews, knowing the individual velocity of a developer is useless. If it's a junior, taking on a 5-point story might be a huge accomplishment. If you're looking at a principal or a senior, you might actually expect a lower velocity, because they're spending more time pairing with other developers to mentor them or help them get unblocked.
Second, find the data that answers the question. When I worked at a newspaper, we used to have screens all over the place that showed how many pageviews specific articles were getting. Except, we didn't sell ads based on total pageviews. We got paid a LOT of money to sell ads to people in our geographical area, and a pittance for everything else. A million pageviews usually meant we had gone viral, but most of those hits were essentially worthless to us. To properly track and incentivize for best return, we should have been tracking local pageviews as our primary metric.
Similarly, if you're trying to justify costs for your development team, just throwing the sprint velocity out there as the number to look at might work at the beginning, but that now becomes the standard you're measured against. And once you start having to maintain features or fix bugs, those numbers are going to go down (it's almost always easier to complete a high-point new-feature story than a high-point maintenance story, simply because you don't have to understand or worry about as much context).
There are a number of newer metrics that have been proposed as standards for dev teams: SPACE and DORA. I don't have an inherent problem with most of these metrics, but I do want to caution against adopting them wholesale as a replacement for sprint velocity. Instead, carefully consider what you're trying to use the data for, then select the metrics that provide that data. Please note that these are not all individual metrics; some of them (such as "number of handoffs") are team-based.
SPACE
• Satisfaction and well-being
  ◦ This involves developer satisfaction surveys, analyzing retention numbers, things of that nature. Try to quantify how your developers feel about their processes.
• Performance
  ◦ This might include some form of story points shipped, but would also include things like the number and quality of code reviews.
• Activity
  ◦ Story points completed, frequency of deployments, code reviews completed, or amount of time spent coding vs. architecting, etc.
• Communication/collaboration
  ◦ Time spent pairing, writing documentation, Slack responses, on-call/office hours
• Efficiency/flow
  ◦ Time to get code reviewed, number of handoffs, time between acceptance and deployment
DORA
DORA, or DevOps Research and Assessment, metrics are mostly team-based. They include:
• Frequency of deployments
• Time between acceptance and deployment
• How frequently deployments fail
• How long it takes to recover/restore from a failed deployment
Focus on impact
But all of these metrics should be secondary, as the primary purpose of a scrum team is to deliver value. Thus, the primary metrics should measure direct impact of work: How much value did we deliver to customers?
This can be difficult to ascertain! It requires a lot of setup and analysis around observability, but these are things that a properly focused scrum team should already be doing. When the dev team is handed a story for a new feature, one element of that story should be a success criterion: e.g., at least 10% of active users use this feature in the first 10 days. That measurement should be what matters most. And failing to meet that mark doesn't mean the individual developer failed; it means some underlying assumption (whether it's discoverability or user need) is flawed, and should be corrected for the next set of iterations.
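To make that a little more concrete, here's a rough sketch of what checking such a criterion might look like, assuming you're already collecting feature-usage events somewhere. Every name and shape below is hypothetical, not a prescribed implementation:

interface UsageEvent {
  userId: string;
  feature: string;
  occurredAt: Date;
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Share of active users who touched the feature within `windowDays` of release
function featureAdoptionRate(
  events: UsageEvent[],
  feature: string,
  releasedAt: Date,
  activeUserIds: Set<string>,
  windowDays = 10,
): number {
  const windowEnd = releasedAt.getTime() + windowDays * DAY_MS;
  const adopters = new Set(
    events
      .filter((e) =>
        e.feature === feature &&
        e.occurredAt.getTime() >= releasedAt.getTime() &&
        e.occurredAt.getTime() <= windowEnd &&
        activeUserIds.has(e.userId))
      .map((e) => e.userId),
  );
  return activeUserIds.size === 0 ? 0 : adopters.size / activeUserIds.size;
}

// Hypothetical usage against the story's criterion:
// featureAdoptionRate(events, 'new-dashboard', releaseDate, activeUsers) >= 0.1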
It comes down to outcome-driven development vs. feature-driven development. In scrum, you should have autonomous teams working to build solutions that provide value to the customer. That also includes accountability for the decisions that were made, and a quick feedback loop coupled with iteration to ensure that quality is being delivered continuously.
TL;DR
In summation, these are the important bits:
• Buy-in up and down the corporate stack - structure needs to at least enable the scrum team, not work against it
• Don't estimate more than you need to, and relatively at that
• Know what you're measuring and why
Now, I know individual developers are probably not in a position to take action at the level of "stop using metrics for the wrong reasons." That's why I have a set of individual takeaways you can use.
• Great mindset for performance review
  ◦ I am a terrible self-promoter, but keeping in mind the value I was creating made it easy for me, come promotion time, to say, "this is definitively what I did and how I added value to the team." It made it much easier for me than trying to remember what specific stories I had worked on or which specific ideas were mine.
• Push toward alignment
  ◦ Try to push your leaders into finding metrics that answer the questions they're actually asking. You may not be able to get them to abandon sprint velocity right off the bat, but the more people see useful, actionable metrics, the less they focus on useless ones.
• Try to champion customer value
  ◦ It's what scrum is for, so using customer value as your North Star usually helps cut through confusion and disagreement.
• Get better at knowing what you know / don't know
  ◦ This is literally the point of sprint retros, but sharing understanding of how the system works will help your whole team improve the process and produce better software.
The Game is a mind game in which the objective is to avoid thinking about The Game itself. Thinking about The Game constitutes a loss, which must be announced each time it occurs.
The programming version of The Game has the same rules, but you lose if you think about David Heinemeier Hansson (aka DHH).
Athletics bans don’t affect me, personally, in terms of preventing me from playing sports - I’m well beyond the age or ability for it to matter.
But that fact doesn’t make it feel any less like another punch to the head, another hit to the gut, another in a long line of kicks when I already feel so beaten down.
I can’t explain this feeling.
It’s yet another way of being told that we’re different, separate from, less than. Trans women are women except. Trans men are men but.
It’s especially disheartening when so many struggle to have even the basic aspects of their dignity respected (names, pronouns, getting an education, not getting fired for existing while trans). Time and again, the only concrete actions taken are to strip more from us.
I can’t feel.
It’s a systematic desecration of our humanity, a systemic approach to telling us not only do we not belong, but that we shouldn’t exist.
I grew up on Clean Code, both the book and the concept. I strove for my code to be "clean," and it was the standard against which I measured myself.
And I don't think I was alone! Many of the programmers I've gotten to know over the years took a similar trajectory, venerating CC along with Code Complete and Pragmatic Programmer as the books everyone should read.
But along the way, "clean" started to take on a new meaning. It's not just in the context of code, either; whether in interior design or architecture or print design, "clean" started to arise as a synonym for "minimalism."
This was brought home to me when I was working with a junior developer a couple years ago. I had refactored a component related to one we were working on together to enable necessary functionality, and I was showing him the changes. This was a 200-line component, and he skimmed it for about 45 seconds before saying "Nice, much cleaner."
And it bugged me, but I wasn't sure why. He was correct - it was cleaner - but it felt like that shouldn't have been something he could accurately identify simply by glancing at it. Or at least, if that was the metric he was using, "clean" wasn't cutting it.
Because the fact of the matter is, you can't judge the quality of code without reading it and understanding what it's trying to do, especially without considering it in the context of its larger codebase. You can find signifiers (e.g., fewer lines of code, fewer methods in a class), but "terse" is not a direct synonym of "clean." Sometimes less code is harder to understand or maintain than more code.
I wanted to find an approach, a rubric, that allowed for more specificity. When I get feedback, I much prefer hearing the specific aspects that are being praised or need work - someone telling me "that code's clean" or not isn't particularly actionable.
So now I say code should be Comprehensible, Predictable and Maintainable. I like those three elements because they're important on their own, but also because each builds on the others. You cannot have predictable and maintainable code unless it's also comprehensible, for example.
Comprehensible - People other than the author, at the time the code is written, can understand both what the code is doing and why.
Predictable - If we look at one part of the code (a method, a class, a module), we should be able to infer a number of properties about the rest.
Maintainable - Easy to modify and keep up, as code runs forever
Comprehensibility is important because we don't all share the same context - even if you're the only person who's ever going to read the code, the you of three weeks from now will have an entirely different set of issues you're focusing on, and will not bring the same thoughts to bear when reasoning about the code. And, especially in a professional context, rare is the code that's only ever read by one person.
Predictability speaks to cohesion and replicability across your codebase. If I have a method load on a model responsible for pulling that object's information from the database, all the other models should use load when pulling object info from the DB. Even though you could use get or loadFromDb or any number of terms that are still technically comprehensible, the predictability of using the same word to mean the same thing reduces overall cognitive load when reasoning about the application. If I have to keep track of which word means the action I'm trying to take based on which specific model I'm using, that's a layer of mental overhead that's doing nothing toward actually increasing the value or functionality of the software.
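As a small, hypothetical sketch of that predictability (the model names and the Loadable interface here are mine, not from any particular codebase), the point is simply that every model answers to the same verb:

interface User { id: number; username: string; }
interface Invoice { id: number; total: number; }

// Every model exposes the same verb for "pull this object from the database,"
// so nobody has to remember whether a given model uses get(), fetch() or loadFromDb().
interface Loadable<T> {
  load(id: number): Promise<T>;
}

class UserModel implements Loadable<User> {
  async load(id: number): Promise<User> {
    // ...the real DB call would go here
    return { id, username: 'placeholder' };
  }
}

class InvoiceModel implements Loadable<Invoice> {
  async load(id: number): Promise<Invoice> {
    // ...the real DB call would go here
    return { id, total: 0 };
  }
}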
Maintainability is sort of an extension of comprehensibility - how easy is the code to change or fix down the road? Maintainability includes things like the "open to extension, closed to modification" principle from SOLID, but also things like comments (which we'll get to, specifically, later on). Comprehensibility is focused on "what" the code is doing, which often requires in-code context and clear naming. Maintainability, on the other hand, focuses on the "why" - so that, if I need to modify it later on, I know what the intent of the method/class/variable was, and can adjust accordingly.
The single most important aspect of CPM code is naming things. Naming stuff right is hard. How we name things influences how we reason about them, how we classify them, and how others will perceive them. Because those names eventually evolve to carry meaning on their own, which can be influenced by outside contexts, and that whole messy ball of definition is what the next person is going to be using when they think about the thing.
I do believe most programmers intellectually know the importance of naming things, but it's never given the proper level of respect and care its importance would suggest. Very rarely do I see code reviews that suggest renaming variables or methods to enhance clarity - basically, the rule is if it's good enough that the reviewer understands it at that moment, that's fine. I don't think it is.
A class called User should contain all the methods related to the User model. This seems like an uncontroversial stance. But you have to consider that model in the context of its overall codebase. If there is (and there should be) also a class called Authorization in that codebase, there are already inferences we should be able to draw simply from the names of those two things.
We should assume User and Authorization are closely related; I would assume that some method in Authorization is going to be responsible for verifying that the user of the application is a User allowed to access parts of the application. I would also assume these classes are fairly tightly coupled, and that it would be difficult to use one without the other in some respect.
Names provide signposts and architectural hints of the broader application, and the more attuned to them you are (as both a writer and reader of code), the more information can be conveyed simply by paying attention to them.
If naming is the single most important aspect of CPM, the single most important aspect of naming things is consistency. I personally don't care about most styling arguments (camelCase vs. snake_case, tabs vs. spaces, whatever). If there's a style guide for your language or framework, my opinion is you should follow it as closely as possible, deviating only if there's an actual significant benefit to doing so.
Following style conventions has two advantages: allowing for easier interoperability of code from different sources, and enabling the use of linters and formatters.
Code is easier to share (both out to others and in from others) if everyone uses the same naming conventions and styles, because you're not adding an extra layer of reasoning atop the code. If you have to remember that Library A uses camelCase for methods but Framework B uses snake_case, that's however large a section of your brain that is focusing on something other than the logic of what the code is doing.
And enabling linters and formatters means there's a whole section of code maintenance you no longer have to worry about - you can offload that work to the machine. Remember, computers exist to help us solve problems and offload processing. A deterministic set of rules that can be applied consistently is literally the class of problems computers are designed to handle.
Very broadly, my approach to subjective questions is: Be consistent. Anything that doesn't directly impact comprehensibility is a subjective decision. Make a decision, set your linter or formatter, and never give another thought to it. Again, consistency is the most important aspect of naming.
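As one example of "decide once, then automate," a minimal Prettier config might look something like this (the specific choices below are arbitrary; the point is that the tool enforces them from now on):

// prettier.config.mjs - answer the subjective questions once,
// then let the formatter enforce them on every save and commit.
export default {
  semi: true,
  singleQuote: true,
  tabWidth: 2,
  trailingComma: 'all',
};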
But a critically under-appreciated aspect of naming is the context of the author. Everyone sort of assumes we all share the same context, in lots of ways. "Because we work on the same team/at the same company, the next developer will know the meaning of the class PayGrimples." That may be very broadly true, in that they've probably heard of PayGrimples, but it doesn't mean they share the same context.
A pop-culture example of this is pretty easy - think of the greatest spaceship pilot in the universe, one James Tiberius Kirk. Think about all his exploits, all the strange new worlds he's discovered. Get a good picture of him in your head.
Which one did you pick? Was it The Original Series' William Shatner? The new movies' Chris Pine? Or was it Strange New Worlds' Paul Wesley?
You weren't wrong in whatever you picked. Any of those is a valid and correct answer. But if we were talking about Kirk in conversation, you likely would have asked to clarify which one I meant. If we hadn't, we could talk about two entirely different versions of the same concept indefinitely until we hit upon a divergence point and one of us realized.
Code has that same issue, except whoever's reading it can't ask for that clarification. And they can only find out they're thinking about a different version of the concept if they a) read and digest the code in its entirety before working on it, or b) introduce or uncover a bug in the course of changing it. So when we name things, we should strive for the utmost clarity.
❌ Unclear without context
type User = {
  id: number;
  username: string;
  firstName: string;
  lastName: string;
  isActive: boolean;
}
The above is a very basic user model, most of whose properties are clear enough. Id, username, firstName and lastName are all pretty self-explanatory. But then we get to the boolean isActive.
This could mean any number of things in context. They include, but are not limited to:
• The user is moving their mouse on the screen right now
• The user has a logged-in session
• The user has an active subscription
• The user has logged in within the last 24 hours
• The user has performed an authenticated activity in the last 24 hours
• The user has logged in within the last 60 days
All of those are things we may want to know about the user of any application, depending on what weâre trying to do. Even similar-sounding events with the same time horizon (logged in within the last 24 hours vs. authenticated activity in the last 24 hours) give us different information - I can infer the maximum age of the authentication token in the logged-in case, but without knowing the token exchange process, I cannot make the same inference for authenticated activity.
So why not just provide the meaning with the name?
✅ Clarity without context
type User = {
  id: number;
  username: string;
  firstName: string;
  lastName: string;
  loggedInPrevious24Hours: boolean;
}
Clarity comes through naming things explicitly. Ambiguity is the enemy of clarity, even when you assume the person reading the code should know something.
It's reasonable to assume that the people reading your code are developers - that is, people familiar with coding concepts. Every other context (industry/domain, organization) is not a safe assumption. Therefore, if you have names or terms that are also used in coding, you should clarify the other meaning. (You should do this generally, as well, but specifically with programming-related terms.)
❌ Ambiguity kills comprehension
class Class {}
class Post {}
The word "class" is generally understood in programming as an object-oriented prototype. Outside of programming, it could refer to a classroom of children; a classification system; a group of children in the same grade (e.g., junior class); or a social hierarchy (e.g., upper-class, lower-class).
Post is even worse, because it can be a verb or a noun even in a programming context. Blogs usually have posts, but you can also post content (or use the HTTP verb, POST). Non-tech-wise, we have places to which you can be sent ("I'm being sent to our post in London"), the mail system, or even the structural support for a fence.
✅ Specificity aids everyone
class Classroom {}
class BlogPost {}
All of this is important because being clear matters more than being concise or clever. After consistency, the most important aspect of naming is being descriptive. The name should describe what the code is doing (vs. why or how) - what a method is doing, or what purpose a variable serves.
For the most part, classes should be nouns, because they're describing their domain of influence. Methods should include verbs, because they're performing actions. Variables should be nouns, reflective of whatever purpose they're serving.
If you find yourself struggling with the length of your method, variable or class names, that's not a bad thing. It's usually a sign you need to consider refactoring (more on this a bit later).
To the point of clarity, be sure to use properly spelled real words and names.
❌ Abbreviations and shortcuts
class DateUtil {
  static dateStrFrmo(date: Date): string { ... }
}
Humans have surprisingly good short- and medium-term recall around words and names. Using real words and names makes the concept easier for us to reason about, and easier to keep track of in our heads.
I took the example above from a GitHub code search. I think the original example may have been written by a native German speaker, because if we assume "Frmo" is supposed to be "From," it's using the German sentence structure that puts the verb at the end of the sentence. That makes sense! But if someone isn't familiar with that sentence construction, the name of the method becomes functionally useless.
The misspelling part is important in two respects: one, it can introduce confusion (is it supposed to be "from" or "form"?). The other is relying on the computer - searches, within the IDE or if you're grepping, look for specific terms. If it's spelled wrong, it's not going to get caught in the search.
✅ Use properly spelled real words and names
class DateUtil {
  static getStringFromDate(date: Date): string { ... }
}
Here we've modified it so we essentially have an English sentence - get the string from the date. I know what's being passed in (the date), I know what's coming out (the string), and I know overall what's happening (I'm getting the string from the date).
Beyond naming, there is one other "big" rule that gets us to comprehensible, predictable and maintainable code, an old adage: "Keep it simple, sweetheart." I'm not speaking to system complexity here - your overall architecture should be as complex as needed to do the job. It's closer to SOLID's single-responsibility principle, writ large: Every module, every class, every method, every variable should have one job.
To our earlier example of Users and Authorization, users will take care of the users while authorization handles auth. Neither of them should care about the internal workings of the other; Authorization just needs to know it can call User::load to return the user object.
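A hypothetical sketch of that boundary (the shapes below are mine, building on the example above): Authorization depends only on the public contract of User, never its internals.

interface User {
  id: number;
  username: string;
  roles: string[];
}

class UserModel {
  // One job: pull user data out of storage.
  async load(id: number): Promise<User> {
    // ...the real DB call would go here
    return { id, username: 'placeholder', roles: ['member'] };
  }
}

class Authorization {
  constructor(private users: UserModel) {}

  // One job: decide whether a user may do something.
  // It neither knows nor cares how UserModel.load gets its data.
  async canAccess(userId: number, requiredRole: string): Promise<boolean> {
    const user = await this.users.load(userId);
    return user.roles.includes(requiredRole);
  }
}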
At the method level, this is how we keep our names to a manageable length. You should be able to describe what the method does in a very short sentence. If you need more length (or you leave out things it's doing), it's probably a sign that the method is trying to do too much.
Smaller methods enable reusability - if the method is only doing a specific thing, we are more likely to be able to use it somewhere else. If the method is doing multiple things, we'd likely need to add a parameter in the other cases where we want to use it, because we don't want all of those things to happen all the time.
Keeping each method to a single task means we can decompose complex methods into multiple individual methods. This also makes it easier to read the code.
Literally just reading the names of methods allows us to infer what's going on, divorced of context. For the example below, we would know from the file name that this is TypeScript, and I'll give one hint that it's frontend.
✅ Keep it simple, even for complex actions
constructor() {
  this.assignElements();
  this.setInterval();
  this.getNewArt();
  this.listenForInstructions();
}
Initializing this class assigns elements and sets an interval (meaning there are actions that happen on a set schedule); then we get new art, and listen for instructions. Without even knowing the name of the class, we can pretty confidently assume this has to do with art, and that art gets changed frequently (hence the interval). But there also appears to be a manual interruption possible, with listen for instructions.
If we were debugging an issue related to keyboard commands, I would first look to listenForInstructions. If thereâs an issue with art not showing up, I would check getNewArt.
Each method is narrowly scoped, even if a lot happens. Keeping things simple aids comprehension and predictability, but it's also vital for maintainability. It makes it much easier to write tests.
We cannot confidently change code without tests. If we're making modifications to code without tests, we can read, guess and hope that it won't create any downstream issues, but unless we know exactly what should happen with a given method in all the ways it can be used, we cannot be certain of the impact of any change. A downstream issue is the definition of a regression; determining the output of a method in changing circumstances is the definition of a test. That's why we test: to avoid regressions.
A good unit test is like a science experiment - a hypothesis is proffered, and borne out or disproven through data, accounting for variables. In programming, variables are literally variables.
If we know exactly what our code will do, we have the flexibility to use it in different circumstances. That may sound tautological, but the confidence that we know "exactly" what it will do comes through tests, not an internal sense of "I know what that function does." I would argue most bugs arise through either typos or incorrect assumptions. Most of the typo bugs are caught before release. Most of the insidious bugs that take forever to debug are because someone made an assumption along the way that you have to find and correct.
If all functions perform as we expect, integration issues are drastically reduced. Good unit testing reduces the amount of integration and regression testing you have to do. Maintenance overall becomes easier because we have solid bulwarks of functionality, rather than needing to reason through all possible eventualities fresh every time (or worse, assuming).
I'm not a huge believer in code coverage as a benchmark for quality. I think it can be helpful to have a minimal coverage requirement when you're starting out, to remind yourself to write tests, but 100% coverage means absolutely nothing on its own. Quality is much more important than quantity when it comes to testing, especially making sure you're testing the right things.
Keeping it simple also relates to abstractions. Code is a series of abstractions (unless you're writing assembly, in which case, vaya con Dios), but I'm referring specifically to the abstractions in the codebase that you write. The cardinal sin of object-oriented programming is repetition, codified in a simple rule: "Don't Repeat Yourself." It's not … bad advice, but neither is it a simple panacea we could automate away with, say, a linter or a formatter (or, god forbid, AI).
DRY is overemphasized, possibly because it's such an easy heuristic to apply. "Hey, this looks like other code" is easy to see at a glance, and if you just have an automatic reaction of "I shouldn't repeat myself, ever," you'll automatically push that logic up to a single method that can be used in multiple places.
But deduplication requires an abstraction. In most cases, you're not performing exactly the same logic in two places, but two minor variations (or the same logic on two different types of objects). Those variations then require you to include a parameter, to account for a slight branch.
Having that abstracted method hinders comprehensibility. Even if it's easier/faster to read a one-line reference to the abstracted method, the actual logic being performed now happens out of sight.
I am much less concerned with duplication of code than I am with making sure we find the right abstraction. Thus, I want to propose a different model for thinking about repetition, two rules (because again, simpler != terse) to replace the one DRY rule: we'll call it the Concilio Corollary to the DRY rule, or the damp dyad.
Don't repeat yourself repeating yourself
The wrong abstraction will cost you more than repetition
DRYRY is a little tongue-in-cheek, but essentially: don't worry about trying to find an abstraction until you've implemented similar logic at least three times. Twice is a coincidence, three times is a pattern. Once you've seen the code being used in three different places, you now have the context to know a) whether it's actually doing the same work, and b) how to abstract it to work in different scenarios.
If you find yourself adding a parameter that changes the flow of logic in each scenario, it's probably more correct to abstract only those small parts that are the same, and implement the particular logic in the context it's being used. That's how we find the right abstraction.
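A hedged illustration of the difference (all names below are invented): the first version forces a flag parameter on every caller, while the second abstracts only the genuinely shared step and leaves the differing logic where it's used.

// ❌ One "shared" helper whose flow forks on a flag parameter;
//    readers now trace both branches every time it's called.
function buildGreeting(name: string, forEmail: boolean): string {
  const trimmed = name.trim();
  return forEmail ? `Dear ${trimmed},` : `Hi ${trimmed}!`;
}

// ✅ Abstract only what is actually identical (normalizing the name),
//    and keep the differing logic in the contexts that need it.
function normalizeName(name: string): string {
  return name.trim();
}

const emailGreeting = (name: string) => `Dear ${normalizeName(name)},`;
const chatGreeting = (name: string) => `Hi ${normalizeName(name)}!`;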
All of this is important because existing code has inertia. This is relevant whether you're a more senior developer or just starting out in your career.
Those with less experience tend to be terrified to change existing code, and understandably so. That code already exists; it's doing a job. Even if you're going in to fix a bug, presumably that code was working well enough when it was written that no one noticed it. And no one wants to be the one to create an error, so the existing code is treated as something close to sacrosanct.
For more experienced developers, know that when you're writing code you're creating the building blocks of the application. You're setting the patterns that other developers will later try to implement, because it's "correct" and becomes that codebase's style. Heck, that's literally the predictability maxim - we want it to look similar when it does similar things. But that means if you're writing the wrong abstraction in one place, its impact may not be limited to that single area.
And when a new case arises, the next developer has to decide (without the context of the person who originally wrote it) whether to modify the existing abstraction or create a new one. But the old one is "correct" (again, in that it exists), so it's safer to just use that one. Or, worst case, use it as a template to create a new abstraction. In either case, a new paradigm is being created that needs to be tested and raises the overhead on maintenance, because now we have a separate logic branch.
Those are the big topics I wanted to hit. The rest of these recommendations are important, but lack an overall theme. The biggest of these I want to discuss is commenting.
Naming should be used so we know the "what" of code; comments should be used so we know the "why." I am not referring to automated comments here (e.g., explanations for input parameters and the like in JSDoc), but rather qualitative comments. I would argue that most of the comments I currently see would be superfluous if proper naming conventions were used.
What I want to see in a comment is why a particular variable is a particular value, when it's not clear from the existing context.
❌ Don't explain the "what"
const SOCIAL_MEDIA_CHARACTER_COUNT = 116;
// shortens title for social media sharing
export const getSocialShareText = (post: BlogPost) => {
  if (post.title.length <= SOCIAL_MEDIA_CHARACTER_COUNT) {
    return post.title;
  } else {
    return post.title.substr(0, SOCIAL_MEDIA_CHARACTER_COUNT);
  }
}
This is a pretty typical example of what I see comments used for. We've used naming properly (the method gets the social share text, the constant is the character count we use for social media posts), so the comment "shortens title for social media sharing" is superfluous.
This method provides the social media content. The piece of information I don't have about this code that I would like, both for comprehensibility and maintainability, is why the character count is 116.
The answer is that Twitter used to be the social media service with the shortest content length, 140 characters. Except that since we're developing an app, we're always including a URL, for which Twitter automatically generates a shortlink that takes up 23 characters (+1 for the space between content and link). 140 - 23 - 1 = 116.
That context does not exist within the application, and it's not under our control. So we should include it in a comment, so that if that number changes (or something else becomes popular but has a shorter length limit, or we stop worrying about Twitter entirely), we know from reading the code what this does, and there's a signpost with the word "Twitter" in the comment so it can be found with a simple search.
✅ Explain the "why"
// Twitter has shortest character limit (140); URL shortener is always 23 + space
const SOCIAL_MEDIA_CHARACTER_COUNT = 116;
export const getSocialShareText = (post: BlogPost) => {
  if (post.title.length <= SOCIAL_MEDIA_CHARACTER_COUNT) {
    return post.title + ' ' + post.url;
  } else {
    return post.title.substr(0, SOCIAL_MEDIA_CHARACTER_COUNT) + ' ' + post.url;
  }
}
The other thing to keep in mind about comments is that they're a dependency just as much as code. If we do update that character count, we also need to update the comment explaining it, otherwise we've actively corrupted the context for the next person who has to make a change.
I used to say "never use ternaries," but I've come around a bit. I now believe ternaries should be used only declaratively, with proper formatting.
✅ Use declarative ternaries, with formatting
const title = (postRequest['title'])
? postRequest['title']
: '';
const title = postRequest['title'] || '';
Ternaries are short, concise, and difficult to reason about if they're too complicated. When I say "declarative" ternaries, I mean "the value of a variable is one of two options, dependent upon a single condition."
If you need to test multiple conditions, or if you have more than one variable changing as a result of a condition or set of conditions, don't use ternaries. Use regular if-else statements. They're easier to read and comprehend, and easier to change down the road (which is more likely if you already have multiple conditions or states).
And never nest ternaries.
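A quick sketch of where that line falls (the values below are invented): the nested ternary is exactly what I'd avoid, while the if-else chain says the same thing legibly.

const weight = 12;       // hypothetical order attributes
const isExpress = true;
const isLocal = false;

// ❌ Nested ternary: three conditions crammed into a single expression
const label = weight > 20 ? 'freight' : isExpress ? 'express' : isLocal ? 'courier' : 'standard';

// ✅ The same decision as an if-else chain: easier to read, easier to extend later
let readableLabel: string;
if (weight > 20) {
  readableLabel = 'freight';
} else if (isExpress) {
  readableLabel = 'express';
} else if (isLocal) {
  readableLabel = 'courier';
} else {
  readableLabel = 'standard';
}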
The last bit is around testing, specifically standard library functions. A standard library function is one that comes packaged with the programming language you're using - think Math.round() in JavaScript, or the substring method on strings used above, str.substr(0, 3).
As a rule, you should not test the functionality of code you have no control over - if Chrome is shipping a bad Math.round(), there isn't anything you can do about it (plus, if you go down that rabbit hole long enough, you'll eventually have to test that the heat death of the universe hasn't yet happened). Standard library functions fit that description.
But sometimes you do want to test a method that only uses standard library functionality - the reason is not that you're testing that functionality, but rather that you're arriving at the desired result.
We'll use the social media text as the example. I will always assume substring is working properly until I get user reports, and even then the most I would do is forward them along. What I want to test for is the length of the string that is returned - does it meet my requirements (under 116)? I'm not testing the functionality; I'm including a flag to myself and future developers that this is the maximum length and, if someone modifies the functionality of the method, it should be flagged.
describe('getSocialShareText restricts to Twitter length', () => {
  it('when title is less than length', () => {
    expect(getSocialShareText(MockPostShortTitle).length).toBeLessThanOrEqual(116);
  });

  it('when the title is more than length', () => {
    expect(getSocialShareText(MockPostLongTitle).length).toBeLessThanOrEqual(116);
  });
});
If we were testing functionality, I would call the same constant in my test, because that's what's being used internally. But because I'm testing outcomes, I use an independent value. If someone changes the length without changing the test, they'll get notified. They can at that point change the value used in the test, too, but the test has served its purpose - it notified someone when they violated my "why."
TL;DR
• Focus on specific aspects of code quality
  ◦ Comprehensible, Predictable, Maintainable
• Name stuff properly
  ◦ Clarity over concision and wit
• Keep things simple
  ◦ One module, one class, one method, one variable: One job
• Write tests
  ◦ The only way to confidently modify or reuse code is to be assured of what it does
• Remember the damp dyad
  ◦ Don't repeat yourself repeating yourself
  ◦ The wrong abstraction costs more than repetition
• Comments should explain "why"
  ◦ Provide context for the next person (let naming focus on "what")
Q: What are best practices on implementing agile concepts for enterprise technology teams that are not app dev (e.g., DevOps, Cloud, DBA, etc.)?
A brief summary: 1) Define your client (often not the software's end-user; could be another internal group), and 2) find the way to release iteratively to provide them value. This often requires overcoming entrenched models of request/delivery - similar to how development tends to be viewed as a "service provider" who gets handed a list of features to develop, I would imagine a lot of teams trying to make that transition are viewed as providers and expected to just do what they're told. Working back the request cycle with the appropriate "client" to figure out how to deliver incremental/iterative value is how you can deliver successfully with agile!
Q: How do I convince a client who wants stuff at a certain time to trust the agile process?
There’s no inherent conflict between a fixed-cost SOW and scrum process. The tension that tends to exist in these situations is not the cost structure, but rather what is promised to be delivered and when. Problems ensue when you’re delivering a fixed set of requirements by a certain date - you can certainly do that work in a somewhat agile fashion and gain some of the benefits, but you’re ultimately setting yourself up to experience tension as you get feedback through iterations that might ultimately diverge from the original requirements.
This is the "change order hell" that often comes with client work - agile is by definition flexible in its results, so if we try to prescribe them ahead of time, we're setting ourselves up for headaches. That's not to say it's not worth doing (the process may be beneficial to the people doing the work if the waterfall outcome is prescribed), but note (to yourself and the client) that a waterfall outcome (fixed set of features at a fixed date) brings with it waterfall risk, even if you do the work in an agile fashion.
It is unfortunately very often difficult, but this is part of the "organizational shift" I spoke about. If the sales team does not sell based on agile output, it's very difficult to perform proper agile development in order to reap all its benefits.
Q: We're using agile well; how do we dissuade skip-level leadership from demanding waterfall delivery dates using agile processes?
This is very similar to the previous answer, with the caveat that it’s not on you to convince a level of leadership beyond your own manager of anything. You can and should be providing your manager with the information and advice mentioned in the above answer, but ultimately that convincing has to come from the people they manage, not levels removed. Scrum (and agile, generally) requires buy-in up and down the corporate stack.
Q: What are best practices for ownership of the product backlog?
Best practices are contextual! Ownership of the product backlog is such a tricky question.
In general, I think product backlogs tend to have too many items. I am very much a fan of expiring backlog items - if they haven't been worked on in 30 days (two-ish sprints), they go away (system-enforced!) until the problem they address comes up again.
The product owner is accountable for the priority and what’s included or removed from the product backlog.
I kind of think teams should have two separate stores of stories: One is the backlog, the specific ideas or stories that are going to be worked on (as above) in the next sprint or two, which is the product owner's responsibility. The second is a brainstorming pool - preferably not even in the same system (because you should NOT just be plucking items from the pool and plopping them on the backlog). Rather, these are just broad ideas or needs we want to capture so we don't lose sight of them; from them, specific problems are identified and stories written. This should be curated by the product owner, but allow for easier/broader access to add to it.
Q: Is it ever recommended to have the Scrum Master also be Product Manager?
(I am assuming for the sake of this question that Product Manager = Product Owner. If I am mistaken, apologies!)
I would generally not recommend the product owner and the scrum master be the same person, though I am aware that by necessity it sometimes happens. It takes a lot of varied skills to do both of those jobs, and in most cases if it happens successfully it's because there's a separate system in place to compensate in one or both areas (e.g., there's a separate engineering manager who's picking up a lot of what would generally be SM work, or the product owner is in name only because someone else/external is doing the requirements-gathering/customer interaction). Both positions require a TON of work to perform properly - direct customer interaction, focus groups, metrics analysis and stakeholder interaction are just some of a PM's duties, while the SM should be devoted to the dev team to make sure any blocks get cleared and work continues apace.
But even more than time, there’s a philosophical divide that would be difficult to resolve in one person. The SM should be looking at things from a perspective of what’s possible now, whereas the PM should have a longer-term view of what should be happening soon. Rare is the individual who can hold both of those things in their head with equal weight; usually one is going to be prioritized over the other, to the detriment of the process overall.
Q: What is the best (highest paying) Scrum certification?
If your pay is directly correlated with the specific certification you have, you are very likely working for the company that provides it. Specific certifications may be more favored in certain industries or verticals, but that's no more indicative of pay than the difference between any two companies.
More broadly, I view certifications as proof of knowledge that should be useful and transferable regardless of specific situation. Much like Agile, delivering value (and a track record of doing same) is the best route to long-term career success (and hence more money).
Q: Can you use an agile scrum approach without a central staffing resource database?
Yes, with a but! You do not need a formal method of tracking your resourcing, but the scrum master (at the team level) needs to know their resourcing (in terms of how many developers are going to be available to work that sprint) in order to properly plan the sprint. If someone is taking a vacation, you need to either a) pull in fewer stories, b) increase your sprint length, or c) pull in additional resources (if available to you).
Even at the story level, this matters. If you have a backend ticket and your one BE developer is out, you’re not gonna want to put that in the sprint. But it doesn’t need to be a formal, centralized database. It could be as simple as everyone noting their PTO during sprint planning.
Solutions come in all sizes. The problem in tech (and many other industries, I presume) is that our processes and workflows are structured in such a way that the solutions for a given problem tend to be clustered around the smaller side of the scale.
Consider any given bug. Reported (hopefully) by your QA team or, worst-case, by a customer in production, it points out a specific issue. You, the developer, are tasked with devising a solution. Now, in most shops you'll be given the opportunity to work out the root cause, ensuring that whatever change you make will a) actually fix the problem, and b) not cause any other immediate problems.
And that makes sense, for the most part. Small issues have small solutions. The problem is when you don't step back and take a bigger-picture view of the situation - do all of these disparate problems actually stem from a particular source? Very often, developers are not only encouraged but actually mandated to stick to whatever story they're on, for fear of going out of scope.
While that might make sense from a top-down control perspective, that style of thinking tends to permeate a lot of the other work that gets done, even up to larger-scale issues. Diversity is left to HR, or to a diversity committee, to take care of. In many cases, how and where to include AI in an application is left up to individual departments or teams. Remote work, a topic extremely divisive of late, is being eliminated, limited or left up to "manager discretion" rather than actually looking at the benefits and harms associated with it. A cause extremely close to my heart, accessibility, is frequently treated as an add-on or left up to a handful of specialists to implement (or, worse, a third-party plugin).
These things not only don't have to be left up to small groups to implement or reason through - they shouldn't be. They should be baked into how your organization makes decisions, builds software and interacts with its people.
You need a holistic approach. I want to break these concepts out of silos. If we’re looking at a RACI chart, everyone is responsible for DEIB and accessibility. Everyone should be consulted and accountable for decisions about AI and remote work.
Now, I have a confession. I'm pretty sure it's Steve Jobs' Second Law of Product that any time you think you have an insight, you have to give it a fancy name. I am guilty of this as well.
I use the term "holistic tech" to talk about the convergence of these ideas. A lot of the specific things I'm talking about can be found in other systems or methodologies; I'm just trying to pull all the threads together so we can hopefully weave something useful out of them. In the same way that responsive design was concerned with making sure you could use a product across all screen sizes, I want to make sure that (and here's the subtitle) tech works for everybody.
I'm also gonna borrow some concepts from universal design. Universal design is "the design and composition of an environment so that it can be accessed, understood and used to the greatest extent possible by all people regardless of their age, size, ability or disability."
And last, we'll also fold in some concepts of human-centered design. This, in a nutshell, is thinking beyond your optimal user story. Eric Meyer calls them "stress cases," as opposed to edge cases: you consider the emotional, physical and mental state of your user, rather than just concerning yourself with the state of your application.
But all of these, as implied with the word “design,” are focused primarily on product creation. And while I do want to incorporate that, it’s a part of how we work.
Basically, this whole idea boils down to a single word:
EMPATHY
It’s about seeing other people as, well, people.
And it’s applicable up and down your company stack. It applies to your employees, your boss, your monetization strategy (specifically, not using dark patterns), and it’s especially about your communication, both within your organization and with your users.
As for product design, we’ll start with accessibility.
Very broadly, accessibility is concerned with making sure that everyone can access your content and product. On the web side of things, this is typically accomplished by trying to adhere to the Web Content Accessibility Guidelines, or WCAG.
WCAG has four basic principles:
The first is that content should be perceivable, which relates to multi-sensory content and interfaces. Essentially, you should still be able to access the fundamental value of the content even if you cannot engage with its primary medium; the common examples here are alt text for images or captions for videos.
The second principle is operable: Users must be able to operate user interface controls in multiple modalities. The most common example of this is keyboard navigability; there are several requirements around people being able to use video controls or manipulate modals without using the mouse (or touch).
The third principle is understandable: Text needs to be readable and understandable, and user interface elements should behave in predictable ways. Headers should always act like headers.
The last principle is robustness, which amounts to future-proofing. Make sure you adhere to the specs so that future products that are trying to parse your content know they can do so in a coherent manner.
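To make the first two of those a bit more concrete, here's a tiny sketch - my own illustration rather than anything from the WCAG spec, with made-up element names and paths - of what "perceivable" and "operable" look like in everyday front-end code:

// Perceivable: the image's information survives even if you can't see the image.
const chart = document.createElement("img");
chart.src = "/img/signup-funnel.png"; // hypothetical asset
chart.alt = "Funnel chart: 40% of visitors who start signup finish it";
document.body.append(chart);

// Operable: a real <button> gets keyboard focus and Enter/Space activation for
// free, so the control still works without a mouse or touchscreen.
const toggle = document.createElement("button");
toggle.textContent = "Show transcript";
toggle.addEventListener("click", () => {
    document.getElementById("transcript")?.toggleAttribute("hidden");
});
document.body.append(toggle);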
Now the interesting thing is, I don’t think many people would object to those principles in, well, principle. They seem pretty common-sensical? “I want people to be able to access my content” is a fairly unobjectionable statement. The problem is that most organizations don’t have a good sense for accessibility yet, so the projects are designed and budgeted without the specific accessibility implementations. Then, when it gets brought up, making the change would be “too expensive,” or it would “take too long.”
"And besides, it's an insignificant part of our market anyway." I cannot tell you how many times I've heard this argument. Whether it's an intranet ("we don't have that many disabled people working here") or an internal training video ("there aren't that many blind workers") or a consumer-facing product ("we're willing to live without that tiny part of the market"), there's a sense that accessibility is only for a very small subset of the population.
My favorite group accessibility experiment is to ask people to raise their hand if they use an accommodation.
Then, I ask them to raise a hand if they wear glasses, contacts, or a hearing aid. Or if they don't keep their monitor at full resolution ("less space," on Macs). Or if they ever change their browser's or IDE's zoom level.
Those are all accessibility accommodations.
Because the truth of the matter is, we're all just temporarily abled. I don't ask for hands on this one, but I'll often ask if anyone's ever bought something online while drunk. Formally speaking, you were technically operating with a cognitive impairment when you bought that giant taco blanket on Amazon. And I'm willing to bet your fine motor skills weren't quite up to their usual par, either.
Or maybe you sprained your wrist, or broke a finger. That’s a loss of fine motor control that’s going to make it more difficult to operate the mouse, even if only for a few weeks. Or how about any kind of injury or chronic pain that makes it painful to sit in a chair for long periods? Willing to bet after 4 hours you’re not thinking as clearly or as quickly as you were during hour 1.
Some of these things, like neurodivergence or vision impairment or being paralyzed, can be permanent conditions. But just as many of them aren't. And it's important to keep that in mind, because even if your ideal user story is a 34-year-old soccer mom, chances are she's going to have some sort of cognitive impairment (lack of sleep, stress about kids) or processing difference (trying to juggle multiple things at the same time) or reduced fine motor skills (trying to use your mobile app on the sidelines during December) at some point. So ignoring accessibility doesn't just disenfranchise the "small" portion of your users who are visibly, permanently disabled; it makes things more difficult for potentially all of your users at some point or another.
And as it turns out, adding accessibility features can actually grow your overall market share.
Imagine your first day at NewTube, the hottest new video app on the market. We're looking to change the world … by letting people upload and watch videos. I don't know, venture capital! Anyway, the number of humans on the internet is 5.19 billion, so that's our addressable market. We don't need the microscopic share that would come from adding accessibility features.
Or do we?
Standard accessibility features for videos include text transcripts of the words spoken aloud in the video. The primary intention behind these is to ensure that those with hearing impairments can still understand what's going on in the video. In a past job, proper captions cost somewhere in the range of $20+ per minute of video, though some products such as YouTube now have AI autocaptioning that's getting pretty good.
Another standard feature is an audio description track (and transcript). This is sort of like alt text for video - it describes the images that are being shown on the screen, in order to make that information comprehensible to someone with visual impairments.
Audio description, on the other hand, would look something like this:
Present-day Rose walks to the bow of the research ship, which is deserted. Deep in thought, she climbs the railing and stares down at the water where the Titanic rests below. She opens one hand to reveal the Heart of the Ocean diamond. We flash back to 17-year-old Rose standing on the deck of the Carpathia, digging her hands into Cal's overcoat and finding the diamond. Present-day Rose shakes her head and, with a small gasp, sends the diamond to rest where it should have been some 80 years earlier.
I took some poetic license there, but that's kind of the point of audio description - you're not a court reporter transcribing what's being said, you're trying to convey the emotion and the story for those who can't see the pictures. The transcript part isn't technically a requirement, but since you typically have to write down the script for the AD track anyway, it tends to be included. To my knowledge, no one's managed to get AI to do this work for them in any usable fashion.
Lastly, we have keyboard navigability. Being able to interact with and control the site using just a keyboard makes it easy for those without fine motor control (or who use screen readers) to find their way around.
Three features/feature sets. The first two are pretty expensive - we've either got to pay for or develop an AI service to write the transcriptions, or we have to make sure they're available some other way. Audio descriptions are going to be a cost to us regardless, and not a cheap one. Keyboard navigability could be built into the product, but it would be faster if we could just throw everything together in React and not have to worry about it.
How much of an impact could it have on our audience?
Well, though only 2-3 children out of 1000 are born with hearing impairment, by age 18 the percentage of Americans who complain of at least partial hearing loss rises to about 15%. So if we don't have captions, we'd better hope all our videos are Fail compilations, or we're going to see some steep drop-offs.
When it comes to vision, it's even worse. Approximately a billion people in the world have a vision impairment that was not prevented or has not been addressed. Even assuming significant overlap with the hearing impairment group, we'll use 750,000,000 - roughly another 14.5 percent of our addressable market.
Which leaves us 70% of our addressable market, or 3.63 billion.
Now obviously these numbers are not exact. We're very back-of-the-napkin here, but I would also argue that a real-world scenario could just as easily see our percentage of accommodation-seekers go up as down. The number of temporary cases of all of these conditions, plus the fact that first-world countries have a higher prevalence of RSI (though much better numbers for vision impairment), means this 70% figure is probably not as far from reality as you'd think.
And even beyond people who need those accommodations, what about those who simply want them?
My best friend watches TV with the captions on all the time because it's easier for her to follow along, and she's not alone. Netflix says 40% of global users watch with captions, to say nothing of public exhibitions like bars (where it's often not legally permissible to have the sound on).
Transcripts and audio descriptions are often HUGE boons to SEO, because you're capturing all your content in a written, easily search-indexable format.
And presumably you've used a video app on a TV. The app has already been designed to be used with directional arrows and an OK button - why not extend that to the desktop? You'll notice the remote's functionality is a subset of a keyboard, not a mouse. Boom, keyboard navigation.
So, to recap accessibility: Good for disabled users. Good for abled users. Good for business. And that's the thing: taking a holistic approach to how we do tech should actually make everyone better off. It is the rising tide.
But let's talk about the looming wave that overshadows us all. I speak, of course, of artificial intelligence. In the same way that software ate the world 15 years ago, and Bitcoin was going to replace all our dollars, artificial intelligence is going to eat all our software and all the dollars we software developers used to get paid.
I want to make clear up front that I am not an AI doomsayer. I don't think we're (necessarily) going to get Skynetted, and if we are it's certainly not going to be ChatGPT. Artificial intelligence in its current form is not going to enslave us, but I do think large swaths of the population will become beholden to it - just not in the same way.
Similar to how algorithms were used in the 90s and 2000s to replace human decision-making, I think AI is going to be (ab)used in the same manner. We've all called in to a customer support line only to find that the human on the other end is little more than a conduit between "the system" and us, and the person can't do anything more to affect the outcome than we can.
With AI, we're just going to skip the pretense of the human and have the AI decipher what it thinks you said, attempt remedies within the limits of what it's been programmed to allow, and then disconnect you. No humans (or, likely, actual support) involved.
Is that the worst? Maybe not in all cases. But it's also, in a lot of cases, going to allow these organizations to skip what should be important work and just let the AI make decisions. I'm much less concerned about Skynet than I am about the Paperclip Maximizer.
The paperclip maximizer is a thought experiment proffered by Nick Bostrom in 2003. He postulated that an AI given a single instruction, "Make as many paperclips as possible," would/should end with the destruction of the entire earth and all human life. The AI is not given any boundaries, and humans might switch the machine off (thus limiting the number of paperclips), so the AI will eventually eliminate humans. But even if the AI thinks us benign, at some point the AI consumes all matter on the earth aside from humans, and we are just so full of wonderfully bendable atoms that could be used for more paperclips.
The "thought processes" of generative AIs, as currently constructed, are inherently unknowable. We know the inputs, and we can see the outputs when we put in a prompt, but we can't know what they're going to say - that's where the special sauce "thinking" comes in. We try to control this by introducing parameters, or guidelines, to those prompts to keep them in line.
And I know you might think, "Well, we'll tell it not to harm humans. Or animals. Or disrupt the existing socio-political order. Or …" And that's actually a separate angle to attack this problem - humans not giving the proper parameters. At a certain point, though, if you have to control for the entire world and its infinite varieties of issues, isn't it easier to just do the work yourself? We've already got a lackluster track record in regard to putting reliable guardrails around AI, as the Bing Image Generator's output so thoughtfully proves.
One of the things computer nerds love to do more than anything is break new tech, and image generators are no exception. When Bing introduced a new image generation tool a while back, it restricted use of the phrases "9/11" and "September 11," but it still allowed image generations of "Spongebob flying an airliner into New York in 2000." And of course, the most prominent image of New York in 2000 is likely going to include the World Trade Center.
Sure, Spongebob doing 9/11 is a brand hit to Nickelodeon and insulting to the victims' families. But this shows both failures: despite Bing's overwhelming image-consciousness that should have been baked into the model, the model thought it more important to generate this image than not to. And, separately, Bing failed to put proper safeguards into the system.
So yes, the paperclips are a hyperbolic hypothetical, but if there's one thing that capitalism has taught us it's that there are companies out there who care more about the next dollar than anything else.
Businesses large and small make decisions by weighing the costs of a given option against its expected benefits all the time. Famously, one of the analyses Ford conducted on the Pinto weighed the overall cost of redesigning its fuel safety systems against the general cost to society of the fatal crashes that might be prevented. Because, to Ford, individual deaths were not thought of as particularly tragic. They were just numbers. It does not seem unreasonable to assume AI systems will be misused by those who are unscrupulous in addition to those who are just oblivious.
In accessibility, most people think the cost of not being accessible is "well, how likely are we to get sued?", ignoring the benefit of people using the product more. With AI, the same short-sighted calculus comes into play: "Oh, we'll let the AI do it, and not have to pay a person!" Except, as we've pointed out, the AI probably isn't very good, and the cost comes in consumer goodwill.
And this doesn't even touch things like source data bias, which is a huge issue in resume-reviewing AIs (whose datasets will cause the AI to be more likely to select for people resembling existing employees, exacerbating skewed hiring trends) and predictive policing algorithms (which exacerbate existing crime biases).
Don't forget you can now convincingly generate human-sounding responses for astroturfing campaigns or review spoofing, or empower scammers previously held back by non-native English to suddenly sound like every corporate communication (because AI's probably writing those communiques, too).
Remember the part where I said I'm not an AI doomsayer? I'm really not! I think AI can be used in a lot of unique and interesting applications to make things better. We just need to be more judicious about how we employ it, is all.
For example, in the medical field, there are numerous AI experiments around trying to find tumors in body scans; the AI is not notifying these people on its own, there are doctors who review flagged scans for closer examination. Or in drug trials, companies are using AI to imagine new shapes of proteins that will then go through lots of trials and study before they're ever put in a test subject.
Using AI to generate advice that is then examined by humans for robustness is a great application of the tool. And sure, if Amazon wants to use AI to suggest product recommendations, I guess go ahead. It can't be any worse than its current system of, "Oh, you bought a refrigerator? I bet you also want to buy several more."
But that "generation" word is a sticking point for me. On the point of job-applicant winnowing, I have no problem with using quantitative questions to weed out applicants (do you have x years of experience, boolean can you work in the US), but I would hesitate to let a black-box system make decisions even as small as who should be considered for hiring based on inherently unknowable qualifications (as would be the case with the application of actual AI versus just algorithmic sifting).
And finally, just limit the use of generated content in general. Reaching back into my accessibility bag for a minute, there's a class of images that, per spec, don't need alt text: images that are "purely" decorative and not conveying information. The question I always ask in such cases is: If the image is really providing no value to the user, do you really need it?
The same goes for unedited generated content. If you're sending a communication that can be wholly generated by a computer, do you really need to send it? We're taking the idea of "this meeting could have been an email" even further down the stack: Could that email in fact just be a Slack message, or better yet a reaction emoji? Just because you can expand your one-sentence idea more easily with AI doesn't mean you have to, or even should.
There's likely a place for generated content, but it's not anywhere near where we're using it now, with AI-generated "news" articles or advertising campaigns. It's like when we just tried to add accessibility "with a button" - you cannot just throw this stuff out there and hope it's good enough.
And I would hope it would go without saying, but please don't replace therapists or lawyers or any other human who considers ethics, empathy, common sense or other essentially human traits with AI.
This is along the same lines as "generate advice, not decisions" - if you need to talk to the AI in order to be comfortable sharing things with a live person, that makes total sense. But don't use the AI as a 1:1 replacement for talking to a person, or for getting legal advice.
AI recap: Good for advice, not decisions. Good for assisting people, not replacing them (it's a tool, not the mechanic). It can be good for business.
Now, I think at this point you can pretty much guess what I'm gonna say about remote work. And that's good! Both because this is already long enough and because "holistic tech" is supposed to be a framework, not just specific actionable items.
Remote work, of course, is the idea that you need not be physically present in a building in order to perform a job. Hybrid work is a mix of remote work with some time spent in the office. I'm not gonna try to sell you hard on either option - but I will note that employees prefer flexibility and employers tend to enjoy the larger talent pool. But mostly, I want to talk about how to set up your organization for success in the event you choose one of them.
One of the issues when you have some people in the office and others who aren't is the sense that employees in the office are prioritized above those who are remote. Some of this is understandable - if the company wants to incentivize people to come into the office by offering, for example, catered lunches once a week or something, I wouldn't see that as something those who aren't attending are missing out on … unless they were hired as fully remote.
In my case, for example, company HQ is in Chicago; I live in Phoenix, Arizona. I was hired fully remote, and it would feel to me like I were a lesser class of employee if those in the Chicago area were regularly incentivized with free lunches when there's no pragmatic way for me to partake. Luckily, our office uses a system where everyone gets the same amount of delivery credit when we have whole-office lunches, which allows all of us to feel included.
Beyond incentives, though, is the actual work being done, and this is where I think some teams struggle. Especially when it comes to meetings, the experience of the remote attendee is often an afterthought. This can take the form of whiteboarding (literally writing on a whiteboard off-camera in the room), crosstalk or side discussions that aren't in listening range of the microphone, or showing something on a screen physically that's not present virtually.
It's not just that you shouldn't punish your remote team members for being remote - you're actually hurting the organization as a whole. Presumably every member of the team was hired with an eye to what they can bring to the table; excluding them, or not giving them the full information, hurts everyone involved.
And technological solutions for remote workers will benefit in-person workers as well! Talking into the microphone during meetings can help someone with cochlear implants hear better in the room just as much as it'll help me sitting in my garage office 1200 miles away. Same goes for whiteboarding - having a Google Jam (is that what they're called anymore? Bring back the Wave!) on their screen means my wife can actually follow along; if she has to read a whiteboard from even 14 feet away, she'll lose track of what's going on in the meeting.
Taking the time to plan for the remote attendee's experience helps everyone, and it's not terribly difficult to do. You can even see it for yourself by simply attending the meeting from another room to gain perspective and help troubleshoot any issues. Part and parcel of this, of course, is investing in the tools necessary to make sure everyone can interact and collaborate on the same level.
It's not all about managers/employers, though! Remote employees tend to think that remote work is just like being in the office, only they don't have the commute. And while that's true to some extent, there's another crucial aspect that many of them are missing: Communication.
You have to communicate early and often when you're remote for the simple reason that no one can come check up on you. No one can look over your shoulder to see if you're struggling; no one knows intuitively what your workload looks like if you're overloaded. Similarly, you don't know what impacts your coworker's commit is going to have unless you ask them. There are any number of tools and video sharing apps and all that, but the upshot is you actually have to make focused efforts to use them to make sure everyone's rowing in the same direction.
Remote work: good for employees, good for employers. Good for business.
Finally, let's talk diversity. Commonly abbreviated DEI, or DEIB, diversity, equity, inclusion and belonging has sort of morphed from "let's make sure our workforce looks diverse" to "let's make sure people of different backgrounds feel like they have a place here."
And that's because DEIB should be a culture, not an initiative. At the start, we talked about silos vs. intersectionality. This might start with a one-off committee, or an exec hire, but true DEIB is about your entire culture. Just like remote work can't just be HR's problem, and AI decisions shouldn't be made solely by the finance team, DEIB needs to come from the entire organization.
I actually like the addition of the B to DEI because Belonging is a pretty good shorthand for what we've been discussing throughout. People who are temporarily or permanently disabled are provided the accommodations they need to succeed and thrive; programmers aren't worried AI is going to be used to replace them, but instead are given it as a tool to increase their productivity. Remote workers feel like the company values them even in a different state.
DEIB should encompass all those things, but it can't be left up to just a committee or an exec or even a department to account for it. It all falls on all of us.
And I specifically don't want to leave out the traditional aspects of diversity, especially in tech culture. Minorities of all kinds - women, nonbinary folks, other gender identities, those of different sexual orientations, non-white racial backgrounds - are underrepresented in our industry, and it's important that we keep up the work required to make sure that everyone is given the same access and opportunities.
It's good for business, too! Having a diverse array of perspectives as you develop products will give you ideas or user stories or parameters a non-diverse group might never have thought of. We keep hearing stories about VR headsets that clearly weren't designed for people with long hair, or facial recognition algorithms that only work for those with lighter skin tones. If your product serves everybody, your product will be used by more people. That's basic math!
Recent court rulings have put a damper on what used to be the standard for diversity, a "quota" of either applicants or hires meeting certain criteria. And look, if your organization was hiring just to meet a metric, you didn't have true diversity. Quotas don't create a culture of inclusion, so them going away shouldn't cause that culture to dissipate, either. Seek out diverse upstreams for your hiring pipeline; ensure you're not just tapping the same sources. I promise you, that investment will provide a return.
Say it with me: DEIB is good for employees, good for employers, and itâs good for business.
TLDR: Have empathy. Make sure you consider all aspects of decisions before you make them, because very often taking the personhood of the other party into account is actually the best business move as well.
And with all of these, please note: when I say these things are good for employees, good for employers and, especially, "good for business," that requires them to be executed well. Doing it right means taking as many of these factors into account as you can. This is where holistic tech comes in as our overarching concept.
When it comes to accessibility, the more you have, the more customers you can reach and the more options you give them. With a long lens, that tends to mean you wind up with more money.
When you're considering applications for artificial intelligence, try to keep its influence to advice rather than making decisions, and consider the work that would need to be done in order to implement the solution without AI - if it's not work you're willing to do, is it worth doing at all, AI or no?
With remote work, you need to invest the time and resources to ensure your remote employees have the tools to communicate, while employees need to invest the time and energy to actually communicate.
Finally, diversity and belonging are about your culture, not a committee or a quota. Invest in it, and you'll reap rewards.
OK, we need to talk about OREOs … and how they impacted my view of product iteration.
(Sometimes I hate being a software developer.)
I’m sure you’ve seen the Cambrian explosion of Oreo flavors, the outer limits of which were brought home to me with Space Dunks - combining Oreos with Pop Rocks. (And yes, your mouth does fizz after eating them.)
Putting aside the wisdom or sanity of whoever dreamt up the idea in the first place, it's clear that Oreo is innovating on its tried-and-true concept - but doing so without killing off its premier product. There is certainly some cannibalization of sales going on, but ultimately it doesn't matter to Nabisco because a) regular Oreos are popular enough that you'll never kill them off completely, and b) the halo effect (your mom might really love PB Oreos but your kid hates them, so now you buy two bags instead of one!).
In software, we're taught that the innovator's dilemma tends to occur when you're unwilling to sacrifice your big moneymaker in favor of something new, and someone else without that baggage comes along and eats your cookies/lunch.
Why can’t you do both?
There are a number of different strategies you could employ, from a backend-compatible but disparate frontend offering (maybe with fewer features at a cheaper cost, or a radically new UX) to a faux startup with a small team and resources that can iterate on new ideas until they find what the market wants.
But the basic idea remains the same: Keep working away at the product that’s keeping you in the black, but don’t exclude experimentation and trying new approaches from your toolkit. Worst-case scenario, you still have the old workhorse powering through. In most cases, you’ll have some tepid-to-mild hits that diversify your revenue stream (and potentially eat at the profit margins of your competitors) and open new opportunities for growth.
And every once in a while you’ll strike gold, with a brand-new product that people love and might even supplant your tried-and-true Ol' Faithful.
The trick then is to not stop the ride, and keep rolling that innovation payoff over into the next new idea.
You know it’s a good sign when the first thing I do after finishing an article is double-check whether the whole site is some sort of AI-generated spoof. The answer on this one was closer than you might like, but I do think it’s genuine.
Jakob Nielsen, UX expert, has apparently gone and swallowed the AI hype by unhinging his jaw, if the overall subjects of his Substack are to be believed. And that’s fine, people can have hobbies, but the man’s opinions are now coming after one of my passions, accessibility, and that cannot stand.
This gif pretty much sums up my thoughts after a first, second and third re-read.
I got mad at literally the first actual sentence:
Accessibility has failed as a way to make computers usable for disabled users.
Nielsen’s rubric is an undefined “high productivity when performing tasks” and whether the design is “pleasant” or “enjoyable” to use. He then states, without any evidence whatsoever, that the accessibility movement has been a failure.
Accessibility has not failed disabled users; it has enabled tens of millions of people to access content, services and applications they otherwise would not have. To say it has failed is to not even make perfect the enemy of the good; it's to ignore all progress whatsoever.
I will be the first to stand in line to shout that we should be doing better; I am all for interfaces and technologies that help make content more accessible to more people. But this way of thinking skips over the array of accessible technologies and innovations that have already been developed and made computers easier, faster and more pleasant to use.
For a very easy example, look at audio description for video. Content that would have been completely inaccessible to someone with visual impairments (video with dialogue) can now be understood through the presentation of the same information in a different medium.
Or what about those with audio processing differences? They can use a similar technology (subtitles) to have the words that are being spoken aloud present on the video, so they can more easily follow along.
There are literally hundreds, if not thousands of such ideas (small and large) that already exist and are making digital interfaces more accessible. Accessibility is by no means perfect, but it has succeeded already for untold millions of users.
The excuse
Nielsen tells us there are two reasons accessibility has failed: It’s expensive, and it’s doomed to create a substandard user experience. We’ll just tackle the first part for now, as the second part is basically just a strawman to set up his AI evangelism.
Accessibility is too expensive for most companies to be able to afford everything that's needed with the current, clumsy implementation.
This line of reasoning is absolute nonsense. For starters, this assumes that accessibility is something separate from the actual product or design itself. It’s sort of like saying building a nav menu is too expensive for a company to afford - it’s a feature of the product. If you don’t have it, you don’t have a product.
Now, it is true that remediating accessibility issues in existing products can be expensive, but the problem there is not the expense or difficulty in making accessible products, it’s that it wasn’t baked into the design before you started.
It’s much more expensive to retrofit a building for earthquake safety after it’s built, but we still require that skyscrapers built in California not wiggle too much. And if the builders complain about the expense, the proper response is, “Then don’t build it.”
If you take an accessible-first approach (much like mobile-first design), your costs are not appreciably larger than ignoring it outright. And considering it’s a legal requirement for almost any public-facing entity in the US, Canada or EU, it is quite literally the cost of doing business.
A detour on alt text
As an aside, the above image is a good example of the difference between the usability approach and the accessibility approach to supporting disabled users. Many accessibility advocates would insist on an ALT text for the image, saying something like: "A stylized graphic with a bear in the center wearing a ranger hat. Above the bear, in large, rugged lettering, is the phrase "MAKE IT EASY." The background depicts a forest with several pine trees and a textured, vintage-looking sky. The artwork has a retro feel, reminiscent of mid-century national park posters, and uses a limited color palette consisting of shades of green, brown, orange, and white." (This is the text I got from ChatGPT when I asked it to write an ALT text for this image.)
On the other hand, I don't want to slow down a blind user with a screen reader blabbering through that word salad. Yes, I could - and should - edit ChatGPT's ALT text to be shorter, but even after editing, a description of the appearance of an illustration won't be useful for task performance. I prefer to stick with the caption that says I made a poster with the UX slogan "Keep It Simple."
The point of alt text is to provide a written description of visual indicators. It does NOT require you to describe in painstaking detail all of the visual information of the image in question. It DOES require you to convey the same idea or feeling you were getting across with the image.
If, in the above case, all that is required is the slogan, then you should not include the image on the page. You are explicitly saying that it is unimportant. My version of the alt text would be: "A stylized woodcut of a bear in a ranger hat, evoking National Park posters, sits over top of text reading 'Make it easy.'"
Sorry your AI sucks at generating alt text. Maybe you shouldn't rely on it for accessibility, because true accessibility requires ascertaining intent and including context?
The easy fix
Lolz, no.
The "solution" Nielsen proposes should be no surprise: Just let AI do everything! Literally, in this case, he means "have the AI generate an entire user experience every time a user accesses your app," an ability he thinks is no more than five years away. You know, just like how for the past 8 years full level 5 automated driving has been no more than 2-3 years away.
Basically, the AI is given full access to your “data and features” and then cobbles together an interface for you. You as the designer get to choose “the rules and heuristics” the AI will apply, but other than that you’re out of luck.
This, to be frank, sounds terrible? The reason we have designers is to present information in a coherent and logical flow with a presentation that’s pleasing to the eye.
The first step is that the AI will be … inferring? Guessing? Prompting you with a multiple-choice quiz? Reading a preset list of disabilities that will be available to every "website" you visit?
It will then take that magic and somehow customize the layout to benefit you. Oddly, the two biggest issues that Nielsen writes about are font sizes and reading level; the first of which is already controllable in basically every text-based context (web, phone, computer), and the second of which requires corporations to take on faith that the AI can rewrite their content completely while maintaining any and all style and legal requirements. Not what I’d bet my company on, but sure.
But my biggest complaint about all of this is that it fails the very thing Nielsen is claiming to solve: It's almost certainly going to be a "substandard user experience!" Because it won't be cohesive - there will literally have been no thought put into how it's presented to me. We as a collective internet society got fed up with social media filter bubbles after about 5 years of prolonged use, and now everything I interact with is going to try to be intensely personalized?
Note how we just flat-out ignore any privacy concerns. I’m sure AI will fix it!
I really don’t hate AI
AI is moderately useful in some things, in specific cases, where humans can check the quality of its work. As I’ve noted previously, right now we have not come up with a single domain where AI seems to hit 100% of its quality markers.
But nobody’s managed to push past that last 10% in any domain. It always requires a human touch to get it “right.”
Maybe AI really will solve all of society’s ills in one fell swoop. But instead of trying to pivot our entire society around that one (unlikely) possibility, how about we actually work to make things better now?
Today I want to talk about data transfer objects, a software pattern you can use to keep your code better structured and metaphorically coherent.
I'll define those terms a little better, but first I want to start with a conceptual analogy.
It is a simple truth that, no matter whether you focus on the frontend, the backend or the whole stack, everyone hates CSS.
I kid, but also, I don't.
CSS is probably among the most reviled of technologies we have to use all the time. The syntax and structure of CSS seem almost intentionally designed to make it difficult to translate from concept to "code," even for simple things. Ask anyone who's tried to center a div.
And there are all sorts of good historical reasons why CSS is the way it is, but most developers find it extremely frustrating to work with. It's why we have libraries and frameworks like Tailwind. And Bulma. And Bootstrap. And Material. And all the other tools we use that try their hardest to make sure you never have to write actual CSS while still reaping the benefits of a presentation layer separate from content.
And we welcome these tools, because it means you don't need to understand the vagaries of CSS in order to get what you want.
It's about developer experience, making it easier on developers to translate their ideas into code.
And in the same way we have tools that cajole CSS into giving us what we want, I want to talk about a pattern that allows you to not worry about anything other than your end goal when you're building out the internals of your application. It's a tool that can help you stay in the logical flow of your application, making it easier to puzzle through and communicate about the code you're writing, both to yourself and others. I'm talking about DTOs.
DTOs
So what is a DTO? Very simply, a data transfer object is a pure, structured data object - that is, an object with properties but no methods. The entire point of the DTO is to make sure that you're only sending or receiving exactly the data you need to accomplish a given function or task - no more, no less. And you can be assured that your data is exactly the right shape, because it adheres to a specific schema.
And as the "transfer" part of the name implies, a DTO is most useful when you're transferring data between two points. The title refers to one of the more common exchanges, when you're sending data between front- and back-end nodes, but there are lots of other scenarios where DTOs come in handy.
Sending just the right amount of data between modules within your application, or consuming data from different sources that use different schemas, are just some of those.
I will note there is literature that suggests the person who coined the term, Martin Fowler, believes that you should not have DTOs except when making remote calls. He's entitled to his opinion (of which he has many), but I like to reuse concepts where appropriate for consistency and maintainability.
The DTO is one of my go-to patterns, and I regularly implement it for both internal and external use.
I'm also aware most people already know what pure data objects are. I'm not pretending we're inventing the wheel here - the value comes in how they're applied, systematically.
Advantages
DTOs are a systematic approach to managing how your data flows through and between different parts of your application, as well as to and from external data stores.
Properly and consistently applied, DTOs can help you maintain what I call metaphorical coherence in your app. This is the idea that the names of objects in your code are the same names exposed on the user-facing side of your application.
Most often, this comes up when weâre discussing domain language - that is, your subject-matter-specific terms (or jargon, as the case may be).
I can't tell you the number of times I've had to actively work out whether a class with the name of "post" refers to a blog entry, or the action of publishing an entry, or a location where someone is stationed. Or whether "class" refers to a template for object creation, a group of children, or one's social credibility. DTOs can help you keep things organized in your head, and establish a common vernacular between engineering and sales and support and even end users.
It may not seem like much, but that level of clarity makes talking and reasoning about your application so much easier, because you don't have to jump through mental hoops to understand the specific concept you're trying to reference.
DTOs also help increase type clarity. If you're at a shop that writes Typescript with "any" as the type for everything, you have my sympathies, and also stop it. DTOs might be the tool you can wield to get your project to start using proper typing, because you can define exactly what data's coming into your application, as well as morph it into whatever shape you need it to be on the other end.
Finally, DTOs can help you keep your code as modular as possible by narrowing down the data each section needs to work with. By avoiding tight coupling, we can both minimize side effects and better set up the code for potential reuse.
And, as a bonus mix of points two and four, when you integrate with an external source, DTOs can help you maintain your internal metaphors while still taking advantage of code or data external to your system.
To finish off our quick definition of terms, a reminder that PascalCase is where all the words are jammed together with the first letter of each word capitalized; camelCase is the same except the very first letter is lowercase; and snake case is all lowercase letters joined by underscores.
This is important for our first example.
Use-case 1: FE/BE naming conflicts
The first real-world use-case we'll look at is what was printed on the box when you bought this talk. That is, when your backend and frontend don't speak the same language, and have different customs they expect the other to adhere to.
Trying to jam them together is about as effective as when an American has trouble ordering food at a restaurant in Paris and compensates by yelling louder.
In this example, we have a PHP backend talking to a Typescript frontend.
I apologize to those who don't know one or both languages. For what it's worth, we'll try to keep the code as simple as possible to follow, with little-to-no language-specific knowledge required. In good news, DTOs are entirely language agnostic, as we'll see as we go along.
Backend
class User
{
    public function __construct(
        public int $id,
        public string $full_name,
        public string $email_address,
        public string $avatar_url
    ) {}
}
Per PSR-12, which is the coding standard for PHP, class names must be in PascalCase and method names must be in camelCase. However, the guide "intentionally avoids any recommendation" as to styling for property names, instead just asking for "consistency."
Very useful for a style guide!
As you can see, the project weâre working with uses snake case for its property names, to be consistent with its database structure.
Frontend
class User {
    userId: number;
    fullName: string;
    emailAddress: string;
    avatarImageUrl: string;

    load(userId: number): void { /* load from DB */ }
    save(): void { /* persist */ }
}
But Typescript convention (for the most part - there's not really an "official" style guide in the same manner, but your Googles, your Microsofts, your Facebooks tend to agree) says that you should be using camelCase for your variable names.
I realize this may sound nit-picky, or like small potatoes to those of us used to working as solo devs or on smaller teams, but as organizations scale up, consistency and parallelism in your code are vital to making sure both that your code and data have good interoperability, and that devs can be moved around without losing significant chunks of time simply reteaching themselves style.
Now, you can just choose one of those naming schemes to be consistent across the frontend and backend, and outright ignore one of the style standards.
But then your project is asking one set of your developers to context-switch specifically for this application. It also makes your code harder to share (unless you adopt this convention-breaking across your extended cinematic code universe). You've also probably killed a big rule in your linter, which you now have to customize in all implementations.
OR, we can just use DTOs.
Now, I don't have a generic preference whether the DTO is implemented on the front- or the back-end - that determination has more to do with your architecture and organizational structure than anything else.
Who owns the contracts in your backend/frontend exchange is probably going to be the biggest determiner - whichever side controls them, the other is probably writing the DTO. Though if you're consuming an external data source, you're going to be writing that DTO on the frontend.
Where possible, I prefer to send the cleanest, least amount of data required from my backend, so for our first example we'll start there. Because we're writing the DTO in the backend, the data we send needs to conform to the schema the frontend expects - in this instance, Typescript's camelCase.
Backend
class UserDTO
{
    public function __construct(
        public int $userId,
        public string $fullName,
        public string $emailAddress,
        public string $avatarImageUrl
    ) {}
}
That was easy, right? We just create a data object that uses the naming conventions we're adopting for sharing data. But of course, we have to get our User model into the DTO. This brings me to the second aspect of DTOs, the secret sauce: the translators.
Translators
function user_to_user_dto(User $user): UserDTO
{
    return new UserDTO(
        $user->id,
        $user->full_name,
        $user->email_address,
        $user->avatar_url
    );
}
Very simply, a translator is the function (and it should be no more than one function per level of DTO) that takes your original, nonstandard data and jams it into the DTO format.
Translators get called (and DTOs are created) at points of ingress and egress. Whether that's internal or external, the point at which a data exchange is made is when a translator is run and a DTO appears - which side of the exchange it lives on is up to your implementation.
You may also, as the next example shows, just want to include the translator as part of the DTO.
Using a static create method allows us to keep everything nice and contained, with a single call to the class.
class UserDTO
{
    public function __construct(
        public int $userId,
        public string $fullName,
        public string $emailAddress,
        public string $avatarImageUrl
    ) {}

    public static function from_user(User $user): UserDTO
    {
        return new self(
            $user->id,
            $user->full_name,
            $user->email_address,
            $user->avatar_url
        );
    }
}
$userDto = UserDTO::from_user($user);
I should note we're using extremely simplistic base models in these examples. Often, something as essential as the user model is going to have a number of different methods and properties that should never get exposed to the frontend.
You could do all of this by customizing the serialization method for your object, but I would consider that a distinction in implementation rather than strategy.
An additional benefit of going the separate DTO route is you now have an explicitly defined model for what the frontend should expect. Now, your FE/BE contract testing can use the definition rather than exposing or digging out the results of your serialization method.
So that's a basic backend DTO - great for when you control the data that's being exposed to one or potentially multiple clients using a different data schema.
Please bear with me - I know this probably seems simplistic, but we're about to get into the really useful stuff. We gotta lay the groundwork first.
Frontend
Let's back up and talk about another case - when you don't control the backend. Now, we need to write the DTO on the frontend.
First we have our original frontend user model.
class User {
    userId: number;
    fullName: string;
    emailAddress: string;
    avatarImageUrl: string;

    load(userId: number): void { /* load from DB */ }
    save(): void { /* persist */ }
}
Here is the data we get from the backend, which I classify as a Response for organizational purposes. This is to differentiate it from a Payload, which is data you send to the API (we'll get into those later).
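The response itself isn't reproduced here, but it's the same snake_case shape our backend model uses; sketched as a type (a minimal illustration with assumed names), it would look something like:

// The raw shape the API actually returns - snake_case, straight from the backend.
interface UserResponse {
    id: number;
    full_name: string;
    email_address: string;
    avatar_url: string;
}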
When we translate the response, we can change the names of the parameters before they ever enter the frontend system. This allows us to maintain our metaphorical coherence within the application, and shield our frontend developers from old/bad/outdated/legacy code on the backend.
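Roughly, that means a properties-only DTO plus its translator - a sketch of the pattern rather than the talk's exact code, with the function name being mine:

interface UserDTO {
    userId: number;
    fullName: string;
    emailAddress: string;
    avatarImageUrl: string;
}

// The translator runs at the point of ingress, renaming fields as they come in.
function userResponseToUserDTO(response: UserResponse): UserDTO {
    return {
        userId: response.id,
        fullName: response.full_name,
        emailAddress: response.email_address,
        avatarImageUrl: response.avatar_url,
    };
}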
Another nice thing about using DTOs in the frontend, regardless of where they come from, is they provide us with a narrow data object we can pass to other areas of the application that don't need to care about the methods of our user object.
DTOs work great in these cases because they allow you to remove the possibility of other modules causing unintended consequences.
Notice that while the User object has load and save methods, our DTO just has the properties. Any modules we pass our data object to are literally incapable of propagating manipulations they might make, inadvertently or otherwise. Can't make a save call if the object doesn't have a save method.
Use-case 2: Metaphorically incompatible systems
For our second use-case, let's talk real-world implementation. In this scenario, we want to join up two systems that, metaphorically, do not understand one another.
Magazine publisher
Has custom backend system (magazines)
Wants to explore new segment (books)
Doesn't want to build a whole new system
I worked with a client; let's say they're a magazine publisher. Magazines are a dying art, you understand, so they want to test the waters of publishing books.
But you can't just build a whole new app and infrastructure for an untested new business model. Their custom backend system was set up to store data for magazines, but they wanted to explore the world of novels. I was asked to help them build out that Minimum Viable Product.
This is the structure of the data expected by both the existing front- and back-ends. Because everything's one word, we don't even need to worry about incompatible casing.
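The original structure was shown as a slide, so here's a rough sketch of what that magazine-shaped response might look like (the property names are illustrative, not the client's actual schema):

interface AuthorResponse {
    name: string;
    email: string;
}

interface ArticleResponse {
    title: string;
    body: string;
    author: AuthorResponse; // in the magazine world, authors hang off articles
}

interface MagazineResponse {
    title: string;
    month: number;
    issueNo: number; // magazine-only concepts our book domain won't care about
    articles: ArticleResponse[] | null; // may come back empty or null
}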
Naive implementation
This new product requires performing a complete overhaul of the metaphor.
But we are necessarily limited by the backend structure as to how we can persist data.
If we just try to use the existing system as-is, but change the name of the interfaces, itâs going to present a huge mental overhead challenge for everyone in the product stack.
As a developer, you have to remember how all of these structures map together. Each chapter needs to have an author, because that's the only place we have to store that data. Every book needs to have a month, and a number. But no authors - only chapters have authors.
So we could just use the data structures of the backend and remember what everything maps to. But that's just asking for trouble down the road, especially when it comes time to onboard new developers. Now, instead of them just learning the system they're working on, they essentially have to learn the old system as well.
Plus, if (as is certainly the goal) the transition is successful, now their frontend is written in the wrong metaphor, because it's the wrong domain entirely. When the new backend gets written, we're going to have the exact same problem in the opposite direction.
I do want to take a moment to address what is probably obvious - yes, the correct decision would be to build out a small backend that can handle this, but I trust you'll all believe me when I say that sometimes decisions get made for reasons other than "what makes the most sense for the application's health or the development team's morale."
And while you might think that find-and-replace (or IDE-assisted refactoring) will allow you to skirt this issue, please trust me that you're going to catch 80-90% of cases and spend twice as much time fixing the rest as it would have taken to write the DTOs in the first place.
Plus, as in this case, your hierarchies don't always match up properly.
What we ended up building was a DTO-based structure that allowed us to keep metaphorical coherence with books but still use the magazine schema.
Proper implementation
You'll notice that while our DTO uses the same basic structures (Author, Parts of Work [chapter or article], Work as a Whole [book or magazine]), our hierarchies diverge. Whereas Books have one author, Magazines have none; only Articles do.
The author object is identical from response to DTO.
You'll also notice we completely ignore properties we don't care about in our system, like IssueNo.
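A sketch of the book-side DTOs, under the same illustrative assumptions as the response shapes above:

interface AuthorDTO {
    name: string;
    email: string; // identical shape to the response, so no translator needed
}

interface ChapterDTO {
    title: string;
    body: string;
    // no author here - in our domain, only the book has one
}

interface BookDTO {
    title: string;
    author: AuthorDTO | null; // promoted from the first article, if any
    chapters: ChapterDTO[];
    // month and issueNo simply don't exist in the book metaphor
}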
How do we do this? Translators!
Translating the response
We pass the MagazineResponse into the BookDTO translator, which then calls the Chapter and Author DTO translators as necessary.
This is also the first time we're using one of the really neat features of translators, which is the application of logic. Our first use is really basic, just checking whether the Articles response is empty so we don't try to run our translator against null. This is especially useful if your backend has optional properties, as you'll need logic to properly model your data.
But logic can also be used to (wait for it) transform your data when we need to.
Remember, in the magazine metaphor, articles have authors but magazine issues don't. So when we're storing book data, we're going to use their schema by grabbing the author of the first article, if it exists, and assigning it as the book's author. Then, our chapters ignore the author entirely, because it's not relevant in our domain of fiction books with a single author.
Because the author response is the same as the DTO, we don't need a translation function. But we do have proper typing, so if either of them changes in the future, it should throw an error and we'll know we have to go back and add a translation function.
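In code, that translator might look something like this (again, a sketch of the approach with names of my own choosing):

function magazineResponseToBookDTO(magazine: MagazineResponse): BookDTO {
    // Guard against an empty or null Articles response before translating.
    const articles = magazine.articles ?? [];

    return {
        title: magazine.title,
        // Books have one author; in the magazine schema only articles do,
        // so we promote the first article's author if there is one.
        author: articles.length > 0 ? articles[0].author : null,
        chapters: articles.map(articleToChapterDTO),
    };
}

function articleToChapterDTO(article: ArticleResponse): ChapterDTO {
    // The article's author is deliberately dropped - chapters don't have one
    // in our domain of single-author novels.
    return {
        title: article.title,
        body: article.body,
    };
}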
The payload
Of course, this doesn't do us any good unless we can persist the data to our backend. That's where our payload translators come in - think of Payloads as DTOs for anything external to the application.
For simplicity's sake we'll assume our payload structure is the same as our response structure. In the real world, you'd likely have some differences, but even if you don't, it's important to keep them as separate types. No one wants to prematurely optimize, but keeping the response and payload types separate means a change to one of them will throw a type error if they're no longer parallel, which you might not notice with a single type.
Our translators can be flexible (because we're the ones writing them), allowing us to pass objects up and down the stack as needed in order to supply the proper data.
Note that we're just applying the author to every article, because a) there's no harm in doing so, and b) the system likely expects there to be an author associated with every article, so we provide one. When we pull it into the frontend, though, we only care about the first article.
We also make sure to fill out the rest of the data structure we don't care about so the backend accepts our request. There may be actual checks on those numbers, so we might have to use more realistic data, but since we don't use it in our process, it's just a question of specific implementation.
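A sketch of that payload side, under the same assumptions as before (the filler values are just that - placeholders):

interface ArticlePayload {
    title: string;
    body: string;
    author: AuthorDTO;
}

interface MagazinePayload {
    title: string;
    month: number;
    issueNo: number;
    articles: ArticlePayload[];
}

function bookDTOToMagazinePayload(book: BookDTO): MagazinePayload {
    // The backend wants an author on every article, so we stamp the book's
    // author onto each chapter; the fallback here is purely a placeholder.
    const author = book.author ?? { name: "Unknown", email: "unknown@example.com" };

    return {
        title: book.title,
        articles: book.chapters.map((chapter) => ({
            title: chapter.title,
            body: chapter.body,
            author,
        })),
        // Fields the book metaphor doesn't care about, filled in with values
        // plausible enough for the backend to accept the request.
        month: 1,
        issueNo: 1,
    };
}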
So, through the application of ingress and egress translators, we can successfully keep our metaphorical coherence on our frontend while persisting data properly to a backend not configured for the task. All while maintaining type safety. That's pretty cool.
The single biggest thing I want to impart from this is the flexibility that DTOs offer us.
Use-case 3: Using the smallest amount of data required
When working with legacy systems, I often run into a mismatch between what the frontend expects and what the backend provides; typically, this results in the frontend being flooded with an overabundance of data.
These huge data objects wind up getting passed around and used on the frontend because, for example, that's what represents the user, even if you only need a few properties for any given use-case.
Or, conversely, we have the tiny amount of data we want to change, but the interface is set up expecting the entirety of the gigantic user object. So we wind up creating a big blob of nonsense data, complete with a bunch of null properties and only the specific ones we need filled in. It's cumbersome and, worse, has to be maintained so that any changes to the user model get propagated to your garbage ball, even if those changes don't touch the data points you care about.
One way to eliminate the data blob is to use DTOs to narrowly define which data points a component or class needs in order to function. This is what I call minimizing touchpoints, referring to places in the codebase that need to be modified when the data structure changes.
In this scenario, we're building a basic app and we want to display an avatar for a user. We need their name, a picture and a color for their frame.
What we have is their user object, which contains a profile and groups and sites the user is assigned to, in addition to their address and other various info.
Quite obviously, this is a lot more data than we really need - all we care about are three data points.
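To sketch it out, assume a user object and an avatar component along these lines (the exact shape is illustrative):

// The kind of user object we get back from the backend (shape is illustrative)
interface User {
    id: number;
    profile: {
        name: string;
        imageUrl: string;
        hexColor: string;
        bio: string;
    };
    address: {
        street: string;
        city: string;
        postalCode: string;
    };
    groups: string[];
    sites: string[];
}

// The pre-DTO component: it demands the entire User even though
// it only ever reads three properties off the profile
class Avatar {
    constructor(private user: User) {}

    render(): string {
        const { name, imageUrl, hexColor } = this.user.profile;
        return `<img src="${imageUrl}" alt="${name}" style="border-color: ${hexColor}" />`;
    }
}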
This Avatar class works, technically speaking, but if I'm creating a fake user (say it's a dating app and we need to make it look like more people are using it than actually are), I now have to create a bunch of noise to accomplish my goal.
Even if I'm calling from a completely separate database and class, in order to instantiate an avatar I still need to provide the stubs for the User class.
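Swapping in a DTO, the fix looks something like this. The AvatarDTO and translateUserToAvatarDTO names match the code below; the assumption that everything comes off user.profile is mine:

// The DTO narrows the contract to exactly the three fields the avatar needs
interface AvatarDTO {
    name: string;
    imageUrl: string;
    hexColor: string;
}

// A single touchpoint: if the User model changes, only this translator changes
function translateUserToAvatarDTO(user: User): AvatarDTO {
    return {
        name: user.profile.name,
        imageUrl: user.profile.imageUrl,
        hexColor: user.profile.hexColor,
    };
}

// The DTO-only version of the component
class Avatar {
    constructor(private avatarData: AvatarDTO) {}
}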
By now, the code should look pretty familiar to you. This pattern is really not that difficult once you start to use it - and, I'll wager, a lot of you are already using it, just not overtly or systematically. The bonus to doing it in a thorough fashion is that refactoring becomes much easier - if the frontend or the backend changes, we have a single point from which the changes emanate, making them much easier to keep track of.
Flexibility
But there's also flexibility. I got some pushback on implementing the AvatarDTO; after all, there were a bunch of cases already extant where people were passing the user profile, and they didn't want to go find them. As much as I love clean data, I am a consultant; to assuage them, I modified the code so as to not require extra work (at least, at this juncture).
class Avatar
{
    private avatarData: AvatarDTO;

    constructor(user: User | null, dto?: AvatarDTO)
    {
        if (user) {
            this.avatarData = translateUserToAvatarDTO(user);
        } else if (dto) {
            this.avatarData = dto;
        }
    }
}

new Avatar(george);
new Avatar(null, {
    name: 'Lucy Evans',
    imageUrl: '/assets/uploads/users/le-319391.jpg',
    hexColor: '#fc0006'
});
Instead of requiring the AvatarDTO, we still accept the user as the default argument, but you can also pass it null. That way I can pass my avatar DTO where I want to use it, but we take care of the conversion for them where the existing user data is passed in.
Use-case 4: Security
The last use-case I want to talk about is security. I assume some to most of you already get where I'm going with this, but DTOs can provide you with a rock-solid way to ensure you're only sending data you're intending to.
Somewhat in the news this month is the Spoutible API breach; if you've never heard of it, I'm not surprised. Spoutible is a Twitter competitor, notable mostly for its appalling approach to API security.
I do encourage all of you to look this article up on troyhunt.com, as the specifics of what they were exposing are literally unbelievable.
But for the sake of not spoiling all the good parts, I'll just show you the first horrifying section of data. For authenticated users, the API appeared to be returning the entire user model - mundane stuff like id, username and a short user description, but also the password hash, verified phone number and gender.
Now, I hope it goes without saying that you should never be sending anything related to user passwords, whether plaintext or hash, from the server to the client. It's very apparent that when Spoutible was building its API, they didn't consider what data was being returned for requests, merely that it contained whatever was needed for the task at hand. So they were just returning the whole model.
If only they'd used DTOs! I'm not going to dig into the nitty-gritty of what it should have looked like, but I think you can imagine a much more secure response that could have been sent back to the client.
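Even a rough sketch makes the point, though - something like this, with the field names paraphrased from the write-up rather than taken verbatim:

// Roughly what was coming back, next to the much smaller DTO
// you'd actually want to return
interface SpoutibleUserModel {
    id: number;
    username: string;
    about: string;
    passwordHash: string;   // never belongs in an API response
    verifiedPhone: string;
    gender: string;
}

// The response DTO only carries what the client legitimately needs
interface PublicUserDTO {
    id: number;
    username: string;
    about: string;
}

function toPublicUserDTO(user: SpoutibleUserModel): PublicUserDTO {
    return {
        id: user.id,
        username: user.username,
        about: user.about,
    };
}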
Summing up
If you get in the practice of building DTOs, it's much easier to keep control of precisely what data is being sent. DTOs not only help keep things uniform and unsurprising on the frontend, they can also help you avoid nasty backend surprises.
To sum up our little chat today: DTOs are a great pattern for making sure you're maintaining structured data as it passes between endpoints.
Different components only have to worry about exactly the data they need, which helps decrease both unintended consequences and the number of touchpoints in your code you need to deal with when your data structure changes. This, in turn, will help you maintain modular independence for your own code.
It also allows you to confidently write your frontend code in a metaphorically coherent fashion, making it easier to communicate and reason about.
And you only need to conform your data structure to the backend's requirements at the points of ingress and egress - leaving you free to concern your frontend code only with your frontend requirements. You don't have to be limited by the rigid confines of the backend's data schema.
Finally, the regular use of DTOs can help put you in the mindset of vigilance in regard to what data you're passing between services, without needing to worry that you're exposing sensitive data due to the careless conjoining of model to API controller.
Honestly, I thought we were past this as an industry? But my experience at Developer Week 2024 showed me there’s still a long way to go to overcoming sexism in tech.
And it came from the source I least expected; literally people who were at the conference trying to convince others to buy their product. People for whom connecting and educating is literally their job.
Time and again, both I (an engineer) and my nonbinary wife (a business analyst, at a different organization) found that the majority of the masculine-presenting folks at the booths on the expo floor were dismissive and disinterested, and usually patronizing.
It was especially ironic given one of the predominant themes of the conference was developer experience, and focusing on developer productivity. One of the key tenets of dx is listening to what developers have to say. These orgs failed. Horribly.
My wife even got asked when “your husband” would be stopping by.
I had thought it would go without saying, but female- and androgynous-presenting folk are both decision-makers in their own right as well as people with influence in companies large and small.
To organizations: Continuing to hire people who make sexist (and, frankly, stupid) judgments about who is worth talking to and what you should be conversing with them about is not only insulting, it’s bad business. Founders: If you’re the ones at your booth, educate yourselves. Fast.
I can tell you there are at least three different vendors who were providing services in areas we have professional needs around who absolutely will not get any consideration, simply because we don't want to deal with people like that. I don't assume the whole organization holds the same opinions as their representative; however, I can tell you for a fact that such views are not disqualifying at that organization, and so I have no interest in dealing with them further.
Rather than call out the shitty orgs, I instead want to call out the orgs whose reps were engaging, knowledgeable and overall pleasant to deal with. Both because those who do it right should be celebrated, and because in an attention economy, any attention given (even negative attention) is unfortunately an overall net positive.
The guys at Convex were great, answering all my questions about their seemingly very solid and robust Typescript backend platform.
The folks at Umbraco gave great conference talks, plied attendees with cookies and talked with us at length about their platform and how we might use it. Even though I dislike dotNet, we are very interested in looking at them for our next CMS project.
The developer advocates at Couchbase were lovely and engaging, even if I disagree with Couchbase’s overall stance on their ability to implement ACID.
The folks at the Incident.io booth were wonderful, and a good template for orgs trying to sell services: They brought along an engineering manager who uses their services, and could speak specifically to how to use them.
I want to give a shout-out to those folks, and to exhort organizations to do better in training those you put out as the voice of your brand. This is not hard. And it only benefits you to do so.
As part of my plan to spend more time bikeshedding building out my web presence than actually creating content, I wanted to build an iOS app that allowed me to share short snippets of text or photos to my blog. I’ve also always wanted to understand Swift generally and building an iOS app specifically, so it seemed like a nice little rabbit hole.
With the help of Swift UI Apprentice, getting a basic app that posted content, a headline and tags to my API wasn't super difficult (at least, it works in the simulator. I'm not putting it on my phone until it's more useful). I figured adding a share extension would be just as simple, with the real difficulty coming when it was time to post the image to the server.
Boy was I wrong.
Apple's documentation on Share Extensions (as I think they're called? But honestly it's hard to tell) is laughably bad, almost entirely referring to sharing things out from your app, and even the shitty docs that are correct haven't been updated in what looks like 4+ years.
There are some useful posts out there, but most/all of them assume you’re using UIKit. Since I don’t trust Apple not to deprecate a framework they’ve clearly been dying to phase out for years, I wanted to stick to SwiftUI as much as I could. Plus, I don’t reallllly want to learn two paradigms to do the same thing. I have enough different references to keep in my head switching between languages.
Thank god for Oluwadamisi Pikuda, writing on Medium. His post is an excellent place to get a good grasp on the subject, and I highly suggest visiting it if you’re stuck. However, since Medium is a semi-paywalled content garden, I’m going to provide a cleanroom implementation here in case you cannot access it.
It’s important to note that the extension you’re creating is, from a storage and code perspective, a separate app. To the point that technically I think you could just publish a Share Extension, though I doubt Apple would allow it. That means if you want to share storage between your extension and your primary app, you’ll need to create an App Group to share containers. If you want to share code, you’ll need to create an embedded framework.
But once you have all that set up, you need to actually write the extension. Note that for this example we’re only going to be dealing with text shared from another app, with a UI so you can modify it. You’ll see where you can make modifications to work with other types.
You start by creating a new target (File -> New -> Target, then in the modal “Share Extension”).
[Screenshot: the Xcode menu, selecting "File", then "New", then "Target…"]
[Screenshot: the Xcode new-target modal, with "Share Extension" selected]
Once you fill out the info, this will create a new directory with a UIKit Storyboard file (MainInterface), a ViewController and a plist. We're hardly gonna use any of this. Delete the Storyboard file. Then change your ViewController to use the UIViewController class. This is where we'll define what the user sees when content is shared. The plist is where we define what can be passed to our share extension.
There are only two functions we're concerned about in the ViewController - viewDidLoad() and close(). Close is going to be what closes the extension, while viewDidLoad inits our code when the view is loaded into memory.
For close(), we just find the extensionContext and complete the request, which removes the view from memory.
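A minimal version might look like this (it lives on the ShareViewController we're about to flesh out):

// Completes the request with no returned items, which dismisses the
// share sheet and removes our view from memory
func close() {
    self.extensionContext?.completeRequest(returningItems: [], completionHandler: nil)
}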
viewDidLoad(), however, has to do more work. We call the super class function first, then we need to make sure we have access to the items that are being shared to us.
Since we're only working with text in this case, we need to verify the items are the correct type (here, UTType.plainText).
import UniformTypeIdentifiers
import SwiftUI

class ShareViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Grab the first shared item and its attachment provider
        guard let extensionItem = extensionContext?.inputItems.first as? NSExtensionItem,
              let itemProvider = extensionItem.attachments?.first else {
            self.close()
            return
        }

        let textDataType = UTType.plainText.identifier
        if itemProvider.hasItemConformingToTypeIdentifier(textDataType) {
            // Load the item from itemProvider
            itemProvider.loadItem(forTypeIdentifier: textDataType, options: nil) { (providedText, error) in
                if error != nil {
                    self.close()
                    return
                }
                if let text = providedText as? String {
                    // this is where we load our view
                } else {
                    self.close()
                    return
                }
            }
        } else {
            self.close()
        }
    }

    // ... plus the close() function from above
}
Next, let's define our view! Create a new file, ShareExtensionView.swift. We are just editing text in here, so it's pretty darn simple. We just need to make sure we add a close() function that calls NotificationCenter so we can close our extension from the controller.
import SwiftUI

struct ShareExtensionView: View {
    @State private var text: String

    init(text: String) {
        self.text = text
    }

    var body: some View {
        NavigationStack {
            VStack(spacing: 20) {
                Text("Text")

                TextField("Text", text: $text, axis: .vertical)
                    .lineLimit(3...6)
                    .textFieldStyle(.roundedBorder)

                Button {
                    // TODO: Something with the text
                    self.close()
                } label: {
                    Text("Post")
                        .frame(maxWidth: .infinity)
                }
                .buttonStyle(.borderedProminent)

                Spacer()
            }
            .padding()
            .navigationTitle("Share Extension")
            .toolbar {
                Button("Cancel") {
                    self.close()
                }
            }
        }
    }

    // so we can close the whole extension
    func close() {
        NotificationCenter.default.post(name: NSNotification.Name("close"), object: nil)
    }
}
Back in our view controller, we import our SwiftUI view.
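Here's a sketch of how that wiring might look (exact details may differ from the original tutorial), dropped into viewDidLoad() where we left the "this is where we load our view" comment. The notification name has to match the one our SwiftUI view posts:

// Inside the `if let text = providedText as? String` branch: hand the text
// to our SwiftUI view and embed it in this view controller
DispatchQueue.main.async {
    let contentView = UIHostingController(rootView: ShareExtensionView(text: text))
    self.addChild(contentView)
    self.view.addSubview(contentView.view)

    // Pin the hosted SwiftUI view to fill the extension's view
    contentView.view.translatesAutoresizingMaskIntoConstraints = false
    NSLayoutConstraint.activate([
        contentView.view.topAnchor.constraint(equalTo: self.view.topAnchor),
        contentView.view.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
        contentView.view.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
        contentView.view.trailingAnchor.constraint(equalTo: self.view.trailingAnchor),
    ])
    contentView.didMove(toParent: self)
}

// Also in viewDidLoad(): close the whole extension when the SwiftUI view asks
NotificationCenter.default.addObserver(
    forName: NSNotification.Name("close"),
    object: nil,
    queue: nil
) { _ in
    DispatchQueue.main.async {
        self.close()
    }
}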
The last thing you need to do is register that your extension can handle text. In your Info.plist, you'll want to add an NSExtensionAttributes dictionary with an NSExtensionActivationSupportsText boolean set to true.
[Screenshot: the Info.plist after making the changes described above]
You should be able to use this code as a foundation to accept different inputs and do different things. It's a jumping-off point! Hope it helps.
Not wanting to deal with security/passwords and allowing third-party logins has given way to complacency, or outright laziness. Here are some troubling patterns I’ve noticed trying to de-google my primary domain.
Google does not really keep track of where your account has been used. Yes, there’s an entry in security, but the titles are entirely self-reported and are often useless (wtf is Atlas API production?). They also allow for things like “auth0” to be set as the responsible entity, so I have no idea what these accounts are even for.
This would not be a problem if systems were responsible with the user identity and used your Google account as a signifier. However, many apps (thus far, Cloudinary and Figma are my biggest headaches) treat the Google account as the owner of the account, meaning if I lose access to that Google account (like now, when I'm migrating the email off of Google), I'm SOL.
The RESPONSIBLE way to do this is to allow me to disconnect the Google sign-on and require a password reset. This is just lazy.
Because I use this like three times a year and always have to look it up: When you want to merge folders of the same name on a Mac (e.g., two identically named folders where you want the contents of Folder 1 and Folder 2 to be in Folder 2), hold down the option key and drag Folder 1 into the container directory of Folder 2. You should see the option to merge.
Note that this is a copy merge, not a move merge, so you’ll need to delete the source files when you’re done. It also appears to handle recursion properly (so if you have nested folders named the same, it’ll give you the same option).
Did I almost look up a whole app to do this? Yes, I did. Is it stupid this isn’t one of the default options when you click and drag? Yes, it is.
This post brought to you by Google Drive’s decision to chunk download archives separately (e.g., it gives me six self-contained zips rather than 6 zip parts). Which is great for failure cases but awful on success.
Disclaimer: I am not receiving any affiliate marketing for this post, either because the services don’t offer it or they do and I’m too lazy to sign up. This is just stuff I use daily that I make sure all my new computers get set up with.
My current list of must-have Mac apps, which are free unless otherwise noted. There are other apps I use for various purposes, but these are the ones that absolutely get installed on every machine.
1Password
Password manager, OTP authenticator, Passkey holder and confidential storage. My preferred pick, though there are plenty of other options. ($36/year)
Bear
Markdown editor. I write all my notes in Bear, and sync ‘em across all my devices. It’s a pleasant editor with tagging. I am not a zettelkasten person and never will be, but tagging gets me what I need. ($30/year)
Contrast
Simple color picker that also does contrast calculations to make sure you’re meeting accessibility minimums (you can pick both foreground and background). My only complaint is it doesn’t automatically copy the color to the clipboard when you pick it (or at least the option to toggle same).
Dato
Calendar app that lives in your menubar, using your regular system accounts. Menubar calendar is a big thing for me (RIP Fantastical after their ridiculous price increase), but the low-key star of the show is the “full-screen notification.” Basically, I have it set up so that 1 minute before every virtual meeting I get a full-screen takeover that tells me the meeting is Happening. No more “notification 5 minutes before, try to do something else real quick then look up and realize 9 minutes have passed.” ESSENTIAL. ($10)
iTerm2
I’ve always been fond of Quake-style terminals, so much so that unless I’m in an IDE it’s all I’ll use. iTerm lets you a) remove it from the Dock and App Switcher, b) force it to load only via a global hotkey, and c) animate up from whatever side of the screen you choose to show the terminal. A+. I tried WarpAI for a while, and while I liked the autosuggestions, giving up an always-available terminal that doesn’t clutter the Dock or App Switcher was, apparently, a deal-breaker for me.
Karabiner Elements
Specifically for my laptop when I’m running without my external keyboard. I map caps lock to escape (to mimic my regular keyboards), and then esc is mapped to hyper (for all my global shortcuts for Raycast, 1Password, etc.).
NextDNS
Secure private DNS resolution. I use it on all my devices to manage my homelab DNS, as well as set up DNS-based ad-blocking. The DNS can have issues sometimes, especially in conjunction with VPNs (though I suspect it’s more an Apple problem, as all the options I’ve tried get flaky at points for no discernible reason), but overall it’s rock-solid. ($20/year)
NoTunes
Prevents iTunes or Apple Music from launching. Like, when your AirPods switch to the wrong computer and you just thought the music stopped so you tapped them to start and all of a sudden Apple Music pops up? No more! You can also set a preferred default music app instead.
OMZ (oh-my-zsh)
It just makes the command line a little easier and more pleasing to use. Yes, you can absolutely script all this manually, but the point is I don’t want to.
Pearcleaner
The Mac app uninstaller you never knew you needed. I used to swear by AppCleaner, but I’m not sure it’s been updated in years.
Raycast
Launcher with some automation and scripting capabilities. Much better than Spotlight, but not worth the pro features unless you’re wayyyy into AI. The free version is perfectly cromulent. Alfred is a worthy competitor, but they haven’t updated the UI in years and it just feels old/slower. Plus the extensions are harder to use.
Vivaldi
I’ve gone back to Safari as my daily driver, but Vivaldi is my browser of choice when I’m testing in Chromium (and doing web dev in general. I love Safari, but the inspector sucks out loud). I want to like Orion (it has side tabs!). It keeps almost pulling me back in but there are so many crashes and incompatible sites I always have to give up within a week. So Safari for browsing, Vivaldi for development.
Alina Gingertail’s name sounds like a D&D character, but her music sounds like every Gaelic-ish song I hear in movies where, when I look it up, I find out the artist has released exactly one song in that style ever.
Also, HOW does YouTube Music still not have embeds???