
Text Posts

An image promoting my improv for developers workshop at Beer City Code

I'm headed back to the Midwest to do some speakerizing again in August 2024.

Beer City Code 24 is in Grand Rapids, MI, on Aug 2-3. I'm super excited to present a workshop, Improv for Developers, which is where we'll do actual improv training and then talk about how those skills translate to software development. It's 6 hours (!!), but it should be a lot of fun!

I'll also talk about greenfield development: specifically, that it doesn't really exist anymore. There are always preexisting considerations you're going to have to take into account, so I'll give some hard-won tips on sussing them out.

DevUp will be held in St. Louis on Aug. 14-16. I'll be talking about greenfields again, as well as reasons scrum-based development tends to fail, and how we can measure developer productivity.

Hope to see you this summer!

YES, AND you also have to write documentation or no one will know what the hell you were thinking when you wrote it.

Though I am no great fan of AI or its massively over-hyped potential, I also do not think it's useless. As Molly White put it:

When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial.

I wholeheartedly agree with those claims, and don't want to get into the specifics of them too much. Instead, I wanted to think out loud/write about why there's such a wide range of expectations and opinions on the current and future states of AI.

To get the easy one out of the way: Many of the most effusive AI hype people are in it for the money. They're raising venture capital by saying AI, they're trying to get brought in as consultants on AI, or they're trying to sell their AI product to businesses and consumers. I don't think that's a particularly new phenomenon when it comes to new technology, though perhaps there is some novelty in how many different ways people are attempting to get their slice of the cake (companies cooking up AI models, apps trying to sell AI generation to consumers, hardware and cloud providers selling the compute necessary to do all of the above, etc.).

But once we take pure profit motive out of the way, there are, I think, two key areas of difference between people who believe in AI wholeheartedly and those who are neutral to critical.

The first is software development experience. Those who understand what it actually means when people say "AI is thinking" tend to have an overall more pessimistic view of the pinnacle of current AI generation strategies. In a nutshell, all of the current generative models ingest as much content as possible of whatever kind of thing they'll be asked to output. Then they're given a "prompt," and they (in simplistic terms) try to piece together the image/string of words/video that looks most likely based on what came before.

This is why these models "hallucinate" - they don't "know" anything in the way you know that Washington, DC is the capital of the United States. The model just knows that when a sentence starts "The capital of the United States is" it usually ends with the words "Washington, DC."

And that can be useful in some instances! This is why AI does very well on low-level coding tasks - a lot of the basics of programming are pretty repetitive and pattern-based, so an expert pattern-matcher can do fairly well at guessing the most likely outcome. But it's also why AI developer assistants produce stupid mistakes: they don't "understand" the syntax or the language or even the problem statement as a fundamental unit of knowledge. They simply read a string of text and try to figure out what would most likely come next.
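To make that concrete, here's a deliberately toy sketch of the idea. Real models use neural networks over tokens, not a word-frequency table, so treat this as an analogy for "continue with whatever looks most probable," not as how these systems are actually built:

    <?php
    // Toy "most likely next word" generator (illustrative only).
    $training = "the capital of the united states is washington dc . "
              . "the capital of france is paris .";
    $words = preg_split('/\s+/', trim($training));

    // Count how often each word follows each other word.
    $follows = [];
    for ($i = 0; $i < count($words) - 1; $i++) {
        $follows[$words[$i]][$words[$i + 1]] =
            ($follows[$words[$i]][$words[$i + 1]] ?? 0) + 1;
    }

    // Start from a "prompt" word and keep appending the most frequent follower.
    $current = 'is';
    $output  = [$current];
    for ($i = 0; $i < 3 && !empty($follows[$current]); $i++) {
        arsort($follows[$current]);                 // most frequent follower first
        $current  = array_key_first($follows[$current]);
        $output[] = $current;
    }

    echo implode(' ', $output), PHP_EOL;            // e.g. "is washington dc ."

The table has no concept of capitals or countries; it just echoes whatever continuation showed up most often in the training text, which is the behavior I'm describing above, writ very small.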

The other thing you learn from experience is edge cases - specifically, what doesn't work. This type of knowledge tends to accumulate only through having worked on a product before and understanding how different pieces come together (or don't). AI lacks this awareness of context, focusing only on what immediately surrounds the section it's working on.

But the other primary differentiator applies to the layperson, who can best be understood as a consumer, and it can be condensed to a single word: Taste.

I'm reminded of a quote from Ira Glass I heard on some podcast:

... all of us who do creative work … we get into it because we have good taste. But it’s like there’s a gap, that for the first couple years that you’re making stuff, what you’re making isn’t so good, OK? It’s not that great. It’s really not that great. It’s trying to be good, it has ambition to be good, but it’s not quite that good. But your taste — the thing that got you into the game — your taste is still killer, and your taste is good enough that you can tell that what you’re making is kind of a disappointment to you ...

I think this is true, and I think it's the biggest differentiator between people who think what AI is capable of right now is perfectly fine and those who think it'll all wind up being a waste of time. People who can't, or are unwilling to, create text/images/videos on their own think that AI is a great shortcut. This is either because the quality of what the AI can produce is better than what they can do unassisted, or they don't have the taste to see the difference in the first place.

I don't know that I think there's a way to bridge that gap, any more than there is a way to explain to people who think that criticism of any artform is "unfair" or that "well, could you do any better?" is a valid counterpoint to cultural criticism. There are simply those people whose taste is better than what can be created through an amalgamation of data used to train a model, and those who think that a simulacrum of art is indistinguishable from (or better than) the real thing.

It's amazing how short my attention span for new fads is anymore. I don't want to blame Trump for this one, but my eagerness to ignore any news story he was involved in definitely accelerated the decline of my willingness to cognitively engage with the topic du jour significantly.

#AI

The Game is a mind game in which the objective is to avoid thinking about The Game itself. Thinking about The Game constitutes a loss, which must be announced each time it occurs.

The programming version of The Game has the same rules, but you lose if you think about David Heinemeier Hansson (aka DHH).

And no, I'm not linking to why I lost today.

Broke a four-month winning streak, dangit.

When I gave my talk, "That's not real scrum: Measuring and managing productivity for development teams" at MiTechCon 2024 in Pontiac, MI, there were a number of great questions, both in-person and from the app. I collected them here, as a supplement to the accessible version of the talk.

Q: What are best practices on implementing agile concepts for enterprise technology teams that are not app dev (e.g., DevOps, Cloud, DBA, etc.)?

A brief summary: 1) Define your client (often not the software's end-user; it could be another internal group), and 2) find a way to release iteratively to provide them value. This often requires overcoming entrenched models of request/delivery — similar to how development tends to be viewed as a "service provider" that gets handed a list of features to develop, I would imagine a lot of teams trying to make that transition are viewed as providers and expected to just do what they're told. Working backward through the request cycle with the appropriate "client" to figure out how to deliver incremental/iterative value is how you can deliver successfully with agile!

Q: How do I convince a client who wants stuff at a certain time to trust the agile process?

There's no inherent conflict between a fixed-cost SOW and a scrum process. The tension that tends to exist in these situations is not the cost structure, but rather what is promised to be delivered and when. Problems ensue when you're delivering a fixed set of requirements by a certain date - you can certainly do that work in a somewhat agile fashion and gain some of the benefits, but you're setting yourself up to experience tension as you get feedback through iterations that might ultimately diverge from the original requirements.

This is the "change order hell" that often comes with client work — agile is by definition flexible in its results, so if we try to prescribe them ahead of time, we're setting ourselves up for headaches. That's not to say it's not worth doing (the process may be beneficial to the people doing the work if the waterfall outcome is prescribed), but note (to yourself and the client) that a waterfall outcome (fixed set of features at a fixed date) brings with it waterfall risk, even if you do the work in an agile fashion.

It is unfortunately very often difficult, but this is part of the "organizational shift" I spoke about. If the sales team does not sell based on agile output, it's very difficult to perform proper agile development and reap all its benefits.

Q: We're using Agile well; how do we dissuade skip-level leadership from demanding waterfall delivery dates using agile processes?

This is very similar to the previous answer, with the caveat that it's not on you to convince a level of leadership beyond your own manager of anything. You can and should be providing your manager with the information and advice mentioned in the above answer, but ultimately that convincing has to come from the people they manage, not levels removed. Scrum (and agile, generally) requires buy-in up and down the corporate stack.

Q: What are best practices for ownership of the product backlog?

Best practices are contextual! Ownership of the product backlog is such a tricky question.

In general, I think product backlogs tend to have too many items. I am very much a fan of expiring backlog items — if they haven't been worked on in 30 days (two-ish sprints), they go away (system-enforced!) until the problem they address comes up again.

The product owner is accountable for the priority and what's included or removed from the product backlog.

I kind of think teams should have two separate stores of stories. One is the backlog: specific ideas or stories that are going to be worked on (as above) in the next sprint or two, which is the product owner's responsibility. The second is a brainstorming pool — preferably not even in the same system (because you should NOT just be plucking from the pool and plopping items onto the backlog). Rather, these are broad ideas or needs we want to capture so we don't lose sight of them; from them, specific problems are identified and stories written. This pool should be curated by the product owner, but allow for easier/broader access to add to it.

Q: Is it ever recommended to have the Scrum Master also be Product Manager?

(I am assuming for the sake of this question that Product Manager = Product Owner. If I am mistaken, apologies!)

I would generally not recommend the product owner and the scrum master be the same person, though I am aware it sometimes happens by necessity. It takes a lot of varied skills to do both of those jobs, and in most cases where it happens successfully it's because there's a separate system in place to compensate in one or both areas (e.g., there's a separate engineering manager picking up a lot of what would generally be SM work, or the product owner is one in name only because someone else/external is doing the requirements-gathering/customer interaction). Both positions require a TON of work to perform properly - direct customer interaction, focus groups, metrics analysis and stakeholder interaction are just some of a PM's duties, while the SM should be devoted to the dev team to make sure any blocks get cleared and work continues apace.

But even more than time, there's a philosophical divide that would be difficult to resolve in one person. The SM should be looking at things from a perspective of what's possible now, whereas the PM should have a longer-term view of what should be happening soon. Rare is the individual who can hold both of those things in their head with equal weight; usually one is going to be prioritized over the other, to the detriment of the process overall.

Q: What is the best (highest paying) Scrum certification?

If your pay is directly correlated with the specific certification you have, you are very likely working for the company that provides it. Specific certifications may be favored in certain industries or verticals, but that's no more indicative of pay than the difference between any two companies.

More broadly, I view certifications as proof of knowledge that should be useful and transferable regardless of specific situation. Much like Agile, delivering value (and a track record of doing same) is the best route to long-term career success (and hence more money).

Q: Can you use an agile scrum approach without a central staffing resource database?

Yes, with a but! You do not need a formal method of tracking your resourcing, but the scrum master (at the team level) needs to know their resourcing (in terms of how many developers are going to be available to work that sprint) in order to properly plan the sprint. If someone is taking a vacation, you need to either a) pull in fewer stories, b) increase your sprint length, or c) pull in additional resources (if available to you).

Even at the story level, this matters. If you have a backend ticket and your one BE developer is out, you're not gonna want to put that in the sprint. But it doesn't need to be a formal, centralized database. It could be as simple as everyone noting their PTO during sprint planning.

What's always both heartening and a little bit sad to me is how much scrum teams want to produce good products and provide value, and how often it's over-management that holds them back from doing so.

I keep seeing the iPhone’s popularity and sales numbers thrown around as proto-defenses against allegations of flexing monopolistic power in one category to dominate others.

“Popularity” is an argument IN FAVOR of the government's case, not a defense. The argument is that the iPhone is very popular and sold a lot, and Apple is using that position of strength to stifle innovation and hamper the growth of competitors in related categories (payments, apps, music services, etc.).

You can disagree with the suit all you want, just know what you’re arguing for and against.

I think this ultimately traces back to a weird implicit belief I’ve been noticing lately: that there's something like a Constitutional (or god-given) right to a certain business model. If you listen to the news, you can hear it being implied all the time.

OK, we need to talk about OREOs ... and how they impacted my view of product iteration.

(Sometimes I hate being a software developer.)

A package of Space Dunk Oreos

I'm sure you've seen the Cambrian explosion of Oreo flavors, the outer limits of which were brought home to me with Space Dunks - combining Oreos with Pop Rocks. (And yes, your mouth does fizz after eating them.)

Putting aside the wisdom or sanity of whoever dreamt up the idea in the first place, it's clear that Oreo is innovating on its tried-and-true concept – but doing so without killing off its premier product. There is certainly some cannibalization of sales going on, but ultimately it doesn't matter to Nabisco because a) regular Oreos are popular enough that you'll never kill them off completely, and b) the halo effect (your mom might really love PB Oreos but your kid hates them, so now you buy two bags instead of one!)

In software, we're taught that the innovator's dilemma tends to occur when you're unwilling to sacrifice your big moneymaker in favor of something new, and someone else without that baggage comes along and eats your cookies/lunch.

Why can't you do both?

There are a number of different strategies you could employ, from a backend-compatible but distinct frontend offering (maybe with fewer features at a lower cost, or a radically new UX) to a faux startup with a small team and limited resources that can iterate on new ideas until it finds what the market wants.

But the basic idea remains the same: Keep working away at the product that's keeping you in the black, but don't exclude experimentation and trying new approaches from your toolkit. Worst-case scenario, you still have the old workhorse powering through. In most cases, you'll have some tepid-to-mild hits that diversify your revenue stream (and potentially eat at the profit margins of your competitors) and open new opportunities for growth.

And every once in a while you'll strike gold, with a brand-new product that people love and might even supplant your tried-and-true Ol' Faithful.

The trick then is to not stop the ride, and keep rolling that innovation payoff over into the next new idea.

Just maybe leave Pop Rocks out of it.

I had the Platonic ideal of peanut butter pies at my wife's graduate school graduation in Hershey, PA, like five years ago. (They were legit Reese's Peanut Butter Pies from Mr. Reese himself.) I've chased that high for years, but never found it again. The peanut butter pie Oreos were probably the closest I've gotten.

“[Random AI] defines ...” has already started to replace “Webster’s defines ...” as the worst lede for stories and presentations.

I let the AI interview in the playbill slide because the play was about AI, but otherwise, no bueno.

I have previously mentioned that I love NextDNS, but they do not make certain fundamental things, like managing your block and allow lists, very easy. Quite often I'll hit a URL that's blocked that I'd like to see - rather than use their app, I have to load their (completely desktop-oriented) website, navigate to the right tab, then add the URL in.

That's annoying.

Without writing an iOS app all its own (which sounds like a lot of work), I wanted an easy way to push URLs to the block or allow list. So I wrote an iOS Shortcut that works with a PHP script to send the appropriate messages.

You can find the shortcut here.

The PHP script can be found at this Gist. You'll need to set the token, API key (the API key can be found at the bottom of your NextDNS profile) and profile ID variables in the script. The token is what you'll use to secure the requests from your phone to the server.
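For the curious, here's roughly the shape of such a relay script. This is a minimal sketch, not the actual Gist, and it assumes the NextDNS API accepts a POST to api.nextdns.io/profiles/{profileId}/allowlist (or /denylist) with a JSON body like {"id": "example.com"} and an X-Api-Key header; check their API docs before relying on that:

    <?php
    // Sketch of a relay script an iOS Shortcut could POST to (assumed API shape).
    $token     = 'CHANGE-ME';            // shared secret with the Shortcut
    $apiKey    = 'YOUR-NEXTDNS-API-KEY'; // from the bottom of your NextDNS profile
    $profileId = 'YOUR-PROFILE-ID';

    // Reject requests that don't carry the shared token.
    if (($_POST['token'] ?? '') !== $token) {
        http_response_code(403);
        exit('Forbidden');
    }

    $domain = trim($_POST['domain'] ?? '');
    $list   = ($_POST['list'] ?? '') === 'denylist' ? 'denylist' : 'allowlist';

    if ($domain === '') {
        http_response_code(400);
        exit('Missing domain');
    }

    // Forward the domain to the appropriate NextDNS list.
    $ch = curl_init("https://api.nextdns.io/profiles/{$profileId}/{$list}");
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode(['id' => $domain]),
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            "X-Api-Key: {$apiKey}",
        ],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = curl_exec($ch);
    http_response_code(curl_getinfo($ch, CURLINFO_RESPONSE_CODE) ?: 502);
    curl_close($ch);

    echo $response;

In a setup like this, the Shortcut only has to send a form POST with token, domain and list fields, which is about as much complexity as Shortcuts can handle gracefully.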

I thought about offering a generic PHP server that required you to set everything in Shortcuts, but that's inherently insecure for everyone using it, so I decided against it. I think it would be possible to do this all in Shortcuts, but Shortcuts drives me nuts for anything remotely complex, and this does what I need it to.

It really is annoying how hard it is to manage a basic function. NextDNS also doesn't seem to have set its CORS headers properly for OPTIONS requests, which are required for browser-based interactions because of how they dictate the API token has to be sent.

OpenAI announced Sora, a new model for text-to-video, and it's ... fine? I guess? I mean, I know why they announced it - it's legitimately really cool you can type something in and a video vaguely approximating your description in really high resolution shows up.

I just don't think it's really all that useful in real-world contexts.

Don't get me wrong, I appreciate their candor in the "whoopsies" segments, but even in the show-off pieces some of the video ranges from weird to downright bad.

A screenshot of a video of a woman walking, where her thumb is approximately as long as all her other fingers

Hands are hard! I get it! But there's also quite literally a "bag lady" (a woman who appears to be carrying at least two gigantic purses), and (especially when the camera moves) the main character often floats along the ground without actually walking.

Are these nitpicky things people aren't going to notice on first glance? Maybe. But remember the outrage around Ugly Sonic? People notice small (or large) discrepancies in their popular entertainment, and the brand suffers for it. To say nothing of advertisers! Imagine trying to market your brand-new (well, "new" in car definitions) car without an accurate model of said car in the ad. Or maybe you really want to buy the latest Danover.

An AI-generated commercial of a generic SUV with the word "Danover" as the brand.

It seems like all the current AI output has a limit of "close-ish," from self-driving to video to photos to even text generation. It all requires human editing, often significant for any work of reasonable size, to pull it out of the uncanny valley.

"But look how far they've gotten in such little time!" they cry. "Just wait!"

But nobody's managed to push past that last 10% in any domain. It always requires a human touch to get it "right."

Like, the fake Land Rover commercial is interesting, except imagine the difficulty of getting it to match your new product (look and name) exactly. You're almost certainly going to have to CGI in at least parts of it afterward, at which point you've lost much of the benefit.

Unfortunately, "close enough" is good enough for a lot of people who are lazy, cheap or don't care about quality. The software example I'd give is there probably aren't a lot of companies who'd be willing to pay for software consultant services who are just going to use AI instead, but plenty of those people who message you on LinkedIn willing to pay you $200 for a Facebook clone absolutely are going to pay Copilot $20 a month instead.

And yes, there will be those people (especially levels removed from the actual work) who will think they can replace their employees with chatbots, and it might even work for a little bit. But poorly designed systems always have failure points, and once you hit one you're going to wind up having to scrap the whole thing. A building with a bad foundation can't be fixed through patching.

I have a feeling it's the same in other industries. I do think workers will feel the hit, especially on products that are already lower-budget or where people see an opportunity to cut corners. I also think our standards as a society will relax a little bit in a lot of areas, simply because the mean will regress.

But in good news, I think this'll shake out in a few years where people realize AI isn't replacing everything any more than Web3 did, but AI will have more utility as a tool in the toolkit of professionals. It's just gonna take a bit to get there.

The funny thing is a lot of the uncanny stuff makes it look like the model was trained on CGI videos, which might be a corollary to the prophesied problem of AI training on AI outputs. The dalmatian looks and moves CGI af, and the train looks like a bad photoshop insert where they had a video of a train on flat ground and matted over the background with a picture.

How many Ryan Reynoldses do we as a moviegoing public need? I felt like the original had it more than covered, but with the Chrises three (Pratt, Hemsworth and Evans) and now Ryan Gosling, I feel like my cup overfloweth with meta-acting and fourth-wall-chewing.

To be fair, Pratt did it more but RR did it most.

Not wanting to deal with security/passwords and instead allowing third-party logins has given way to complacency, or outright laziness. Here are some troubling patterns I've noticed while trying to de-Google my primary domain.

1) Google does not really keep track of where your account has been used. Yes, there's an entry in the security settings, but the titles are entirely self-reported and often useless (wtf is "Atlas API production"?). Google also allows things like "auth0" to be set as the responsible entity, so I have no idea what some of these accounts are even for.

2) This would not be a problem if systems were responsible with user identity and used your Google account as a signifier. However, many apps (thus far, Cloudinary and Figma are my biggest headaches) treat the Google account as the owner of the account, meaning if I lose access to that Google account (like now, when I'm migrating my email off of Google), I'm SOL.

The RESPONSIBLE way to do this is to allow me to disconnect the Google sign-on and require a password reset. This is just lazy.

The best solution I've found is to add a new account with an alt email address to the "team" account with admin ownership, but that's a hacky kludge, not a solution.

Because I use this like three times a year and always have to look it up: When you want to merge folders of the same name on a Mac (e.g., two identically named folders where you want the contents of Folder 1 and Folder 2 to be in Folder 2), hold down the option key and drag Folder 1 into the container directory of Folder 2. You should see the option to merge.

Note that this is a copy merge, not a move merge, so you'll need to delete the source files when you're done. It also appears to handle recursion properly (so if you have nested folders named the same, it'll give you the same option).

Did I almost look up a whole app to do this? Yes, I did. Is it stupid this isn't one of the default options when you click and drag? Yes, it is.

This post brought to you by Google Drive's decision to chunk download archives into separate files (e.g., it gives me six self-contained zips rather than one zip split into six parts), which is great for failure cases but awful on success.

Disclaimer: I am not receiving any affiliate marketing money for this post, either because the services don't offer it or because they do and I'm too lazy to sign up. This is just stuff I use daily that I make sure all my new computers get set up with.

My current list of must-have Mac apps, which are free unless otherwise noted. There are other apps I use for various purposes, but these are the ones that absolutely get installed on every machine.

  • 1Password
    Password manager, OTP authenticator, Passkey holder and confidential storage. My preferred pick, though there are plenty of other options. ($36/year)

  • Bear
    Markdown editor. I write all my notes in Bear, and sync 'em across all my devices. It's a pleasant editor with tagging. I am not a zettelkasten person and never will be, but tagging gets me what I need. ($30/year)

  • Contrast
    Simple color picker that also does contrast calculations to make sure you're meeting accessibility minimums (you can pick both foreground and background). My only complaint is it doesn't automatically copy the color to the clipboard when you pick it (or at least the option to toggle same).

  • Dato
    Calendar app that lives in your menubar, using your regular system accounts. Menubar calendar is a big thing for me (RIP Fantastical after their ridiculous price increase), but the low-key star of the show is the "full-screen notification." Basically, I have it set up so that 1 minute before every virtual meeting I get a full-screen takeover that tells me the meeting is Happening. No more "notification 5 minutes before, try to do something else real quick then look up and realize 9 minutes have passed." ESSENTIAL. ($10)

  • iTerm2
    I've always been fond of Quake-style terminals, so much so that unless I'm in an IDE it's all I'll use. iTerm lets me a) remove it from the Dock and App Switcher, b) force it to load only via a global hotkey, and c) animate it up from whatever side of the screen I choose to show the terminal. A+. I tried WarpAI for a while, and while I liked the autosuggestions, the convenience of an always-available terminal that doesn't clutter the Dock or App Switcher is, apparently, a deal-breaker for me.

  • Karabiner Elements
    Specifically for my laptop when I'm running without my external keyboard. I map caps lock to escape (to mimic my regular keyboards), and then esc is mapped to hyper (for all my global shortcuts for Raycast, 1Password, etc.).

  • NextDNS
    Secure private DNS resolution. I use it on all my devices to manage my homelab DNS, as well as set up DNS-based ad-blocking. The DNS can have issues sometimes, especially in conjunction with VPNs (though I suspect it's more an Apple problem, as all the options I've tried get flaky at points for no discernible reason), but overall it's rock-solid. ($20/year)

  • NoTunes
    Prevents iTunes or Apple Music from launching. Like, when your AirPods switch to the wrong computer and you just thought the music stopped so you tapped them to start and all of a sudden Apple Music pops up? No more! You can also set a preferred default music app instead.

  • OMZ (oh-my-zsh)
    It just makes the command line a little easier and more pleasing to use. Yes, you can absolutely script all this manually, but the point is I don't want to.

  • Pearcleaner
    The Mac app uninstaller you never knew you needed. I used to swear by AppCleaner, but I'm not sure it's been updated in years.

  • Raycast
    Launcher with some automation and scripting capabilities. Much better than Spotlight, but not worth the pro features unless you're wayyyy into AI. The free version is perfectly cromulent. Alfred is a worthy competitor, but they haven't updated the UI in years and it just feels old/slower. Plus, the extensions are harder to use.

  • Vivaldi
    I've gone back to Safari as my daily driver, but Vivaldi is my browser of choice when I'm testing in Chromium (and doing web dev in general - I love Safari, but the inspector sucks out loud). I want to like Orion (it has side tabs!); it keeps almost pulling me back in, but there are so many crashes and incompatible sites that I always have to give up within a week. So Safari for browsing, Vivaldi for development.

Still waiting for that SQL UI app that doesn't cost a ridiculous subscription per month. RIP Sequel Pro (and don't talk to me about Sequel Ace; I lost too much data with that app).

At some point companies and orgs are going to learn that when you attune so sharply to the feedback loop, you only hear the loudest voices, who are usually a small minority. If you only cater to them, you’re dooming yourself to irrelevance.

This post was brought to you by my formerly beloved TV series Below Deck

I've recently been beefing up my homelab game, and I was having issues getting a Gotify secure websocket to connect. I love the Caddy webserver for both prod and local installs because of how easy it is to configure.

For local installs, it defaults to running its own CA and issuing a certificate. Now, if you're only running one instance of Caddy on the same machine you're accessing, getting the certs to work in browsers is as easy as running caddy trust.

But in a proper homelab scenario, you're running multiple machines (and, often, virtualized machines within those boxes), and the prospect of grabbing the root cert for each just seemed like a lot of work. At first, I tried to set up a CA with Smallstep, but I was having enough trouble just getting all the various pieces figured out that I figured there had to be an easier way.

There was.

I registered a domain name (penginlab.com) for $10. I set it up with an A record pointing at my regular dev server, and then in the Caddyfile gave it instructions to serve the primary domain, plus a separate site block for the wildcard domain.

When Let's Encrypt issues a wildcard certificate, it uses a DNS challenge, meaning it only needs a TXT record inserted into your DNS zone to prove it should issue you the certificate. Assuming your registrar is among those included in the Caddy DNS plugins, you can set your server to handle that automatically.
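As an illustration (not my exact config), the wildcard site block might look something like this, assuming a Caddy build that includes the Cloudflare DNS plugin and a CF_API_TOKEN environment variable; swap in whichever provider plugin matches your DNS host:

    # Sketch: wildcard site block on the public-facing dev server
    *.penginlab.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        respond "wildcard placeholder" 200
    }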

(If your registrar is not on that list, you can always use

certbot certonly --manual --preferred-challenges dns -d '*.penginlab.com'

and enter the TXT record yourself. You only need to do it once a quarter.)

Now we have a certificate that will be accepted as valid for HTTPS on any subdomain of penginlab.com. You simply copy the fullchain.pem and privkey.pem files down to your various machines (I set up a bash script that scps the files down to one of my local machines and then scps them out to everywhere they need to go on the local network).

Once you have the cert, you can set up your Caddy servers to use it via the tls directive:

tls /path/to/fullchain.pem /path/to/privkey.pem
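In context, a site block on one of the lab machines might look something like this (the hostname, cert paths and upstream port are just examples, not my actual setup):

    # Sketch: lab machine terminating HTTPS with the copied wildcard cert
    gotify.penginlab.com {
        tls /etc/caddy/certs/fullchain.pem /etc/caddy/certs/privkey.pem
        reverse_proxy localhost:8080
    }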

You'll also need to update your local DNS (since your DNS provider won't let you point public URLs at private IP addresses), but I assume you were doing that anyway (I personally use NextDNS for a combination of cloud-based ad-blocking and lab DNS management).

Bam! Fully accepted HTTPS connections from any machine on your network. And all you have to do is run one bash script once a quarter (which you can even throw on a cron). Would that all projects had so satisfying and simple a solution.

I'm definitely not brave enough to put it on a cron until I've run it manually at least three times, TBH. But it's a nice thought!

Re: Apple’s convoluted EU policies

It's surprising how often D&D is relevant in my everyday life. Most people who play D&D are in it to have fun. They follow the rules - not just the letter of the law, but the spirit.

But every once in a while you'll encounter a "rules lawyer," a player who's more concerned with making sure you observe and obey every tiny rule, and with punishing every peccadillo, than with actually having fun.

All the worse when it's your GM, the person in charge of running the game.

But there's one thing you learn quickly - if someone is trying to game the rules, the only way to win (or have any fun) is to play the game right back.

For smaller/mid-tier devs, if you're only offering free apps you should probably just continue in the App Store.

But for larger devs who might run afoul of the new guidelines where apps distributed outside the App Store get charged a fee every time they go over a million users?

Oops, Apple just created collectible apps, where if you have Facebook (and not Facebook2), we know you got in early. Think about it: same codebase, different appId. The external app stores can even set up mechanisms for this to work - every time you hit 999,000 installs, it creates a new listing that just waits for you to upload the new binary (and switches when you hit 995K). Now your users are incentivized to download your app early, in case it becomes the big thing. A lower app number is the new low user ID.

If I'm Microsoft, I'm putting a stunted version of my app in the App Store (maybe an Office Documents Viewer?) for free, with links telling users that if they want to edit, they have to go to the Microsoft App Store to download the app, where Apple doesn't get a dime (especially if Microsoft uses the above trick to roll over the app every 995K users).

Even in the world where (as I think is the case in this one) Apple says all your apps have to be on the same licensing terms (so you can't have some in the App Store and some outside it), it costs barely anything to create a new LLC (and certainly less than the 500K it would cost if your app hits a million users). Apple's an Irish company, remember? So one of your LLCs is in the App Store, and the other is external.

To be clear, I don't like this setup. I think the iPhone should just allow sideloading, period. Is all of this more complicated for developers? Absolutely! Is the minimal amount of hassle worth saving at least 30 percent of your current revenue (or a minimum of $500K if you go off-App Store)? For dev shops of a certain size, I would certainly think so.

The only way to have fun with a rules lawyer is to get them to relax, or get them to leave the group. You have to band together to make them see the error of their ways, or convince them it's so much trouble it's not worth bothering to argue anymore.

Yes, Apple is going to (rules-)lawyer this, but they made it so convoluted I would be surprised if they didn't leave some giant loopholes, and attempting to close them is going to bring the EU down on them hard. If the EU is even going to allow this in the first place.

I'll be hitting the lecture circuit again this year, with three conferences planned for the first half of 2024.

In February, I'll be at Developer Week in Oakland (and online!), talking about Data Transfer Objects.

In March, I'll be in Michigan for the Michigan Technology Conference, speaking about clean code as well as measuring and managing productivity for dev teams.

And in April I'll be in Chicago at php[tek] to talk about laws/regulations for developers and DTOs (again).

Hope to see you there!

Who holds a conference in the upper Midwest in March???

Hey everybody, in case you wanted to see my face in person, I will be speaking at LonghornPHP, which is in Austin from Nov. 2-4. I've got two, make that three, things to say there! That's twice, make that thrice, as many things as one thing! (I added a last-minute accessibility update.)

In case you missed it, I said stuff earlier this year at SparkConf in Chicago!

I said stuff about regulations (HIPAA, FERPA, GDPR, all the good ones) at the beginning of this year. This one is available online, because it was only ever available online.

I am sorry for talking so fast in that one, I definitely tried to cover more than I should have. Oops!

The SparkConf talks are unfortunately not online yet (for *reasons*), and I'm doubtful they ever will be.

WordPress 6.2.1 changelog:

Block themes parsing shortcodes in user generated data; thanks to Liam Gladdy of WP Engine for reporting this issue

As a reminder, from Semver.org:

Given a version number MAJOR.MINOR.PATCH, increment the:
1. MAJOR version when you make incompatible API changes
2. MINOR version when you add functionality in a backward compatible manner
3. PATCH version when you make backward compatible bug fixes

As it turns out, just because you label it as a "security" patch doesn't make it OK to completely annihilate functionality that numerous themes depend on.

This bit us on a number of legacy sites that depend entirely on shortcode parsing for functionality. Because it's a basic feature. We sanitize ACTUAL user-generated content, but the CMS considers all database content to be "user content."

WordPress is not stable, should not be considered to be an enterprise-caliber CMS, and should only be run on WordPress.com using WordPress.com approved themes. Dictator for life Matt Mullenweg has pretty explicitly stated he considers WordPress' competitors to be SquareSpace and Wix. Listen to him.

Friends don't let their friends use WordPress

Rarely is the question asked, "Is our children tweeting?" This question is likely nonexistent in journalism schools, which currently provide the means for 95+ percent of aspiring journalists to so reach said aspirations. Leaving aside the relative "duh" factor (one imagines someone who walks into J101 without a Twitter handle is the same kind of person who scrunches up his nose and furrows his brow at the thought of a "smart ... phone?"), simple (slightly old) statistics tell us that 15% of Americans on the Internet use Twitter.

(This is probably an important statistic for newsrooms in general to be aware of vis-a-vis how much time they devote to it, but that's another matter.)

For most journalism students, Twitter is very likely already a part of life. Every introduction they're given to Twitter during a class is probably time better spent doing anything else, like learning about reporting. Or actually reporting. Or learning HTML.

I know this idea is not a popular one. The allure and promise of every new CMS or web service that comes out almost always includes a line similar to "Requires no coding!" or "No design experience necessary!" And they're right, for the most part. If all you're looking to do is make words appear on the internet, or embed whatever the latest Storify/NewHive/GeoFeedia widget happens to be, you probably don't need to know HTML.

Until your embed breaks. Or you get a call from a reader who's looking at your latest Spundge on an iPad app and can't read a word. Or someone goes in to edit your story and accidentally kills off a closing </p> tag, or adds an open <div>, and everything disappears.

Suddenly it's "find the three people in the newsroom who know HTML," or even worse, try to track down someone in IT who's willing to listen. Not exactly attractive prospects.

Heck, having knowledge of how the web works would probably even help them use these other technologies - not just in troubleshooting, but in basic setup and implementation. In the same way we expect journalists to have the basic competence to produce their stories in Word (complete with whatever styles or codes your antiquated pagination system might prescribe), so too should we expect the same on digital.

Especially in a news climate where reporters are expected as a matter of routine to file their own stories to the web, it's ludicrous that they're not expected to know that an <img> tag self-closes, or even the basic theory behind open and closed tags. No one ever did their job worse because they knew how to use their tools properly.

I'm not saying everyone needs to be able to code his or her own blog, but everyone should have a basic command of their most prominent platform. It's time we shifted the expectations for reporters from "not focused entirely on print" to "actually focused on digital."

Thanks to Elon, no one asks if our children are tweeting anymore. There's a big advantage in learning how to use all your tools properly, even if it doesn't seem like it.