As I've mentioned before, we're moving away from Caspio as our database provider to the extent that it makes sense (not for lack of utility; it's a function of cost). While we've managed to migrate some things over, one of the biggest stumbling blocks is the thing we use Caspio for the most — simple databases that need to be viewable and searchable online.
We have a number of semi-complex databases (read: more than a single-sheet XLS file) that we're not moving anytime soon (deed transfers database, among others, simply because of how we ingest the data), but there are a number that are little more than spreadsheets that we need to be able to view and search.
We investigated a number of vendor alternatives, but most featured pricing problems similar to Caspio, or had records limits absurdly lower than what we need. (Example: One such service offered 100,000 rows of data for $149/month. For comparison, one of our more popular databases, listing Pennsylvania teachers' salaries, has well over 2 million rows alone.) So, once again, Project Time™.
There is one thing that any aspiring programmer must realize when they set out to replace a tool: YOU CAN'T REPLACE A TOOL AT THE HEART OF A MULTI-MILLION DOLLAR CORPORATION ON YOUR OWN. I knew this academically but, as is often the case when setting out on these adventures, my brain chose to heed that advice only when it was convenient to do so.
I often live by the mantra, "If someone else can do it, that means it's possible." It works well something like 75 percent of the time — it prevents me from feeling daunted when facing large projects, but it can be turned around as well.
My favorite caveat is, "Technically, I could build you a reasonable facsimile of Facebook — it just wouldn't be as good, fast or as useful as the real thing."
It's true in that somebody built Facebook, but (more accurately) thousands of somebodies built Facebook. It's doable, it's just not feasible for one person to replicate it completely on their own.
That being said, Past Me was convinced it couldn't be THAT difficult to take a spreadsheet and present it online, despite the fact that people routinely pay up to (and including) hundreds or thousands of dollars per month to companies to be able to do exactly that.
Ah, hubris.
The first priority was figuring out how to store the data. The reason the York Daily Record likes Caspio so much is not just its versatility and usefulness; it's how easy it is to use. Caspio put a lot of time and money into an interface that — while not everyone can use it, and even fewer can take full advantage of all its features — is easy enough that most people can do basic things with little training. This actually posed the greatest challenge: the data needed to be input and edited in such a way that your average reporter (think 35-year-old metro reporter, not 23-year-old working at The Verge) could do so without having to email/call me every five minutes. That ruled traditional databases out right away. (Which is not to say you can't build an edit-friendly MySQL frontend, but I didn't have that kind of build time for this project.)
The easiest and cheapest way forward seemed to be (as ever) through Google. Though I'm becoming more wary of Google Docs' live-editing capabilities, for the purpose of "storing data and being able to edit it directly," Sheets fit the bill.
Because our CMS does not allow for server-side code inclusion (another story for another time), inserting the data into articles needs to be accomplished via a JavaScript drop-in. Since we're going to be building it in JS anyway (and I'm a firm believer in not doing the same work twice unless I forget to commit something to the repository), I figured we'd just use one codebase for both the widget version and the standalone.
After a little bit of searching (I got burned out going through a dozen different Caspio alternatives), I settled on DataTables as our jQuery plugin of choice.
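To give a rough sense of the shape of the thing — this is a sketch, not our production widget, and the endpoint and column names are invented — the drop-in amounts to pulling the sheet's rows as JSON and handing them to DataTables:

```javascript
// Sketch only: pull the sheet's rows from a (hypothetical) JSON endpoint,
// then hand them to DataTables. Endpoint URL and column names are made up.
$.getJSON('https://data.example.com/sheet.php?id=SHEET_ID', function (rows) {
  $('#db-table').DataTable({
    data: rows,                           // array of row objects from the sheet
    columns: [
      { data: 'name', title: 'Name' },
      { data: 'district', title: 'District' },
      { data: 'salary', title: 'Salary' }
    ]
  });
});
```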
Here's the part where I always have trouble when trying to relate the struggles of the average newspaper's newsroom to the more digital-focused newsrooms who have multiple app developers and coders on staff — most newspaper reporters do not have the coding ability beyond making a link or typing into the TinyMCE in WordPress.
You can get them to do things like a YouTube embed using a tag interface [Youtube: https://www.youtube.com/watch?v=jvqfEeuRhLY], but only after some heavy-duty brainwashing (and we still struggle with getting Excerpts right).
So while I — and probably three or four others in our newsroom — have no problem using Quartz's excellent ChartBuilder, it's not something we can just send out to the general population with a "use this!" subject line and expect results.
While some might be content with a simple "Use DataTables!" and inserting some code to auto-activate the tables when people set them up properly, asking your average journalist to use JavaScript parameters is a fool's errand, and we're not even within driving distance of, "Oh yeah, and get your Sheet into JSON for DataTables to use."
Which is not to call them stupid — far from it. It's just that these are people who spent a bunch of time (and, likely, money) to learn how to write stories properly. Then they got to work anytime after 2005 and discovered that it wasn't enough — they have to learn Twitter, Facebook, an ever-increasing number of content management systems and (oh yeah!) they still have to do it all while writing their stories. All of this is doable, of course, but to ask them to learn HTML and JavaScript and every new thing someone invents (which even I have given up all hope of keeping up with; there are just too many new things out there) is simply untenable.
Thus, I consider it my number one job to make their jobs easier for them, not just give them something complicated they have to learn just because it does a new thing (or an old thing in a cooler/cheaper way).
For the first version, it's about as simple as can be. People work on their data using their own preferred Google accounts (work or personal), leaving them with a document they can play around with. Once they're at the point where they're ready to present the data to the public, we copy the data into a separate account. This has the advantage of a) keeping the data under our control, in case the reporter quits/leaves/dies/deletes their account, and b) allowing the reporter to keep their own copy of the data with the fields they don't want shown to the public (internal notes, personally identifying information, that sort of thing). The reporter then grabs the sheet ID from the URL and puts it in the tool.
Assuming the data passes some very basic tests (every column has a header, only one header row, etc.), they're presented with a list of fields. Because our CMS frontend does not allow for responsive design, all our information lives in 600-pixel-wide boxes. So with a little help from jQuery Modal, I added some functionality on top of DataTables' standard hidden-columns feature: some columns are hidden in the default presentation, but the entire entry's information is shown in a modal when a row is clicked.
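A rough sketch of that pattern (selectors, column indexes and the modal markup are illustrative, not our actual widget code):

```javascript
// Hide the overflow columns in the 600px presentation, but keep their data
// around so a click on the row can show the full record in a modal.
var table = $('#db-table').DataTable({
  columnDefs: [
    { targets: [3, 4, 5], visible: false }   // columns 4-6 hidden by default
  ]
});

$('#db-table tbody').on('click', 'tr', function () {
  var record = table.row(this).data();       // full record, hidden fields included
  var html = Object.keys(record).map(function (key) {
    return '<p><strong>' + key + ':</strong> ' + record[key] + '</p>';
  }).join('');
  $('#detail-modal').html(html).modal();     // jQuery Modal opens the detail view
});
```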
For version 1, search is pretty simple: If there's a field, it's searchable. We're hoping to expand on that in later iterations to exclude certain fields from search, as well as create some method of searching specific fields (as seen in this Caspio implementation). Users then add a title (shown only in the full version; we're assuming that wherever the widget drop-in goes, there's already a headline on the article) and customized search text.
They're then taken back to the main screen, where they can find links to the full data page (like this one, which we use for our mobile implementation — neither our apps nor our mobile site executes JavaScript, so we always have to link to a place off our main domain for our mobile readers) as well as the drop-in widget code.
Eventually, we hope to add some things like the extended search functionality, a "download data" option and other enhancements. But for now, we feel like we have a tool for basic database work.
Ten years later, the projects at the GameTimePA URLs are still live and running, even though the main newspaper's domain isn't — and they're pointing to the same server!
It all started with FlappyArms.sexy. For those not in the know, it’s an experiment by the NYTimes’ Alastair Coote to clone FlappyBird — the twist being that, instead of using arrow keys or swipes on a phone, you load the game in a desktop/laptop browser, then connect to it with your phone.
Using the sensors in your phone, it detects when you flap your arms and moves the bird accordingly. I came across it when he tweeted out a link, and immediately played it for an hour.
About a week later, Managing Editor Randy Parker dropped by to ask what I was going to do at our booth at the 2014 edition of the York County Fair. Previously, reporters and editors used their time at the booth to connect with the community in their own ways. Politics reporters might interview a politician live, our graphic artist offered up sketches one year, and this year our photo editor planned a photo walk, taking members of the public around the fair and explaining some of the basic concepts of photojournalism (and helping them compose great shots). Parker specifically said he wanted to make sure that people were doing something that really spoke to what they did/their interests.
I wasn’t lying when I replied with, “Well, the only thing I can think of doing is throwing up FlappyBird and showing people the possibilities of technology.” He even would have let me go along with it, too, I bet.
Then Community News Coordinator Joan Concilio told me about an idea they had for the fair. They envisioned a setup whereby people could tell us the things they thought that made York County special, then display them on a big screen throughout the fair.
Show people what journalism is, what interactive journalism can be. Show them it’s not all “a reporter shows up, talks to people, goes away and later something appears on the website/in the paper.” Show them that journalism can be curation of the public’s contributions — soliciting input and feedback instantaneously — brought together in a package with our deep knowledge and library of photos of the area.
And I thought, “Damn. That sounds like FlappyArms.sexy, except actually relevant to journalism. I gotta get in on that.”
Together on a Tuesday, we worked out that we’d need a submission form and a display (pictured above and below) for the answers, a curated set of photos from our archives and the #yorkfair feed from Instagram. They also wanted to incorporate it long-term into their blog, Only in York County, which we did here. Oh, and the Fair started Friday morning.
Everything actually went fairly quickly. After looking at a number of jQuery image slider plugins, I ultimately wound up building my own owing to the fact that a) none of them did full-screen very well, since the plugins were by and large designed to work on actual sites, not what amounts to a display, and b) I wanted to be able to insert the newest answers immediately, if I had time to build the feature.
We could have done a quick-and-dirty build that was tech-heavy in operation, but we wanted to leave the display/capture running even when we weren’t there, and that required making things a little more user-friendly. The data was stored in Google Sheets (something we’re likely to move away from in the future, as I ran into a number of problems with Google Apps Script’s ability to work with selected cells on a sheet; that bug in and of itself isn’t a huge problem, but the fact that it hasn’t been addressed for so long is worrisome in the extreme), with a custom function for updating or deleting entries (since we were using push and not refreshing the page).
The Instagram API was, as ever, a dream to work with, and it was a cinch to pull stuff in (cited and referenced back to Instagram properly, of course). Even the part I was worried about, the push notifications, was a breeze to set up thanks to Pusher. Highly recommended, if you can afford it — we could, because this required a relatively small number of open push clients (just the display computer plus anything I was testing on at a given time, so we used the sandbox plan). There are a number of self-hosted open-source options — though, if we have need of one and I can’t convince them to pay for Pusher, I’m going to consider Slanger, which uses the Pusher libraries. (Seriously, cannot push Pusher enough.)
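On the display side, the Pusher hookup really is as small as it sounds; something like this sketch (the app key, channel and event names are placeholders, not our real ones):

```javascript
// Display-side sketch: subscribe to new fair submissions pushed from the form.
var pusher = new Pusher('YOUR_APP_KEY');
var channel = pusher.subscribe('fair-submissions');

channel.bind('new-entry', function (data) {
  handlePush(data);   // hand the submission to the queue logic described below
});
```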
In fact, the biggest challenge of the buildout was how to handle multiple push notifications that came in either at the same time or relatively close to each other. The easiest route was to just have the second message override the first, the third push out the second, etc. But the entire point of the exercise was to show people that they could be a part of the journalism immediately, and we didn’t want to discourage multiple people from submitting at once.
Thus, the dequeue() function was born. On the first submission, pause the interval that was paging through the existing items and set a timeout that will eventually restart it. If a push comes in while that timeout is pending, queue the data, get the time remaining and set a new timer (same variable) for that remaining time to fire dequeue() again. When the timer fires, pull the next item out of the queue, display it, and set a new timer to dequeue() again if anything is still queued — or restart the main rotation if nothing is.
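In simplified form — helper names like showEntry() and startSlideshow() are stand-ins, and this is a sketch of the idea rather than the original code — it looks roughly like this:

```javascript
// Simplified sketch of the queue-and-timer idea, not the original code.
var queue = [];
var timer = null;
var DISPLAY_MS = 8000;              // how long each new submission stays on screen

function handlePush(data) {
  stopSlideshow();                  // pause the rotation through existing items
  queue.push(data);
  if (!timer) {
    showEntry(queue.shift());       // show the first submission immediately
    timer = setTimeout(dequeue, DISPLAY_MS);
  }
}

function dequeue() {
  if (queue.length) {
    showEntry(queue.shift());       // more came in while we were displaying
    timer = setTimeout(dequeue, DISPLAY_MS);
  } else {
    timer = null;
    startSlideshow();               // nothing queued; restart the main rotation
  }
}
```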
It was what you’d call a “hard-and-fast” deadline: Our contract with Caspio for database and data services was changing on July 1. On that day, our account — which to that point had been averaging something like 17GB transferred per month — would have to use no more than 5GB of data per month, or else we’d pay to the tune of $50/GB.
Our biggest data ab/user by far was our user-submitted photo galleries. A popular feature among our readers, it allowed them both to upload photos for us to use in the paper (at print quality) and to see them online instantaneously. Caspio stored and displayed them as a database: Here’s a page of a bunch of photos, click one to get the larger version.
We had to come up with something to replace it — and, as ever, without incurring m/any charges, because we don’t have any money to spend.
Requirements
- Allow readers to upload photos (bonus: from any device; previously limited to desktop)
- Store photos and accompanying metadata (name, address, contact info, caption, etc.)
- Display photos and selected metadata (name, caption) on multiple platforms
- Allow for editing/deletion after upload
- Low/no startup or ongoing costs
- Support multiple news properties without much cost for scaling
- DO NOT create additional work
Research
There are a number of image hosts out there, of course, but the terms of use on their accounts vary wildly. The two main hosts we looked into were Flickr and Photobucket. Photobucket had the advantage of being Not Yahoo, which was a plus in my eyes, but their variable pricing structure (not conducive to multiple accounts, difficult to budget for the future) and lack of apparent developer support (the page you’re directed toward to set up an account no longer exists) made that seem unwise.
Flickr offers 1 TB of storage at reasonable pricing, but a hard request limit (3,600/hour) and a reasonable-use clause (“You shall not use Flickr APIs for any application that replicates or attempts to replace the essential user experience of Flickr.com”) kind of limited its appeal as a gallery host. Well, there went that idea. Then we started looking at resources we already had.
A few years ago, Digital First Media provided its news organizations with the nifty MediaCenter installations developed at the Denver Post. MediaCenter is an SEO-friendly, easy-to-use WordPress theme/plugin combo that stores its data in SmugMug, another photo storage site we’d looked at but abandoned based on price. But, you see, we already had an account. An in. (A cheap in, to the delight of my editor.) Once we clarified that we were free to use the API access, we decided to do what the pros do: Build what you need, and partner for the rest. Rather than build out the gallery functionality, we’d just create SmugMug galleries and MediaCenter posts, and direct uploaded photos there.
Challenges
The official SmugMug API is comprehensive, though … somewhat lacking in terms of ease of use. Luckily, someone created a PHP wrapper (PHPSmug), which works, more or less. (There are a few pitfalls, in terms of values not corresponding and some weirdness involving the OAuth procedure, but it’s all work-through-able.)
The whole point of user-generated photos is that you want the content to live forever on the web, but keeping 400 hyper-specific, “Fourth of July”-style categories around in the upload list is going to frustrate the user. We decided to treat categories in two ways: Active and Inactive. Once you create a gallery, it never goes away (so it can live on in search), but you can hide it so it doesn’t constantly jump in the user’s face.
Print workflow was especially important to us, as one of the major goals of the system was to not create additional work. Due to circumstances out of my control, the server we have to work with does not have email functionality. Using a combination of Google Apps Script and some PHP, we weaseled around that limitation to email the original uploaded photo to our normal inbox for photo submissions, so the print workflow doesn’t require using the web interface at all.
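The Apps Script half of that weasel looks roughly like this — a hedged sketch, with the address, parameter names and web-app setup all placeholders; the PHP side just POSTs the uploaded photo's URL and caption to the published script:

```javascript
// Hedged sketch: a published Apps Script web app that fetches the uploaded
// photo and mails it to the photo desk, since the web server can't send email.
function doPost(e) {
  var blob = UrlFetchApp.fetch(e.parameter.photoUrl).getBlob();
  MailApp.sendEmail({
    to: 'photodesk@example.com',                    // placeholder address
    subject: 'Reader photo: ' + e.parameter.name,
    body: e.parameter.caption,
    attachments: [blob]
  });
  return ContentService.createTextOutput('ok');
}
```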
Allowing uploads from mobile devices is almost a cinch since both Android and the later flavors of iOS support in-browser uploads. The whole thing was built off responsive Bootstrap, so that was the easiest part of the whole project.
One of the biggest reasons we have a photo uploader and web gallery in the first place is to reassure people that when they submit a photo to us, we received it. This helps to prevent a deluge of phone calls or emails inquiring whether we in fact received the photo and when we plan to run it. Having the web gallery gives the user instant notification/gratification, and allows us to remind them gently that we don't have the space to print every photo we receive — but you can certainly view them online.
Method
On the backend, we have one database containing three cross-indexed tables — one to hold authentication info (per property), one for the category info and one for the photos themselves. Because we're using SmugMug as the storage system, there's no need to hold the actual photo ourselves (which helps with data usage from both a storage and transfer perspective). All the photo storage table has to hold is the information for retrieving it from SmugMug.
The user navigates to a specific property's upload form, fills it out and uploads the photo. The component parts of the form are stored separately as well as combined into our standard user-caption format. The caption is used when we send the photo to SmugMug, but we also store it locally so we can sync them up if changes need to be made. The photos are directed to the gallery specified by the user.
After a certain amount of time (about 5 minutes on SmugMug's end, and anywhere from 15-30 minutes on our gallery's end because of the massive caching it was designed with), the photo automatically appears on our photo gallery site. From the backend, users are able to create or retire categories, edit photo caption information and delete photos.
There's hope that we'll be able to do things like move photos around or create archive galleries, but that's down the road, if we have the time.
Results
You can view the final product here, here, here or here (spoiler alert: They’re almost exactly the same). There are still features we’d like to add, but there were more fires to put out and we had to move on. Hopefully we can come back to it when things settle down.
My first big in-house migration to save money!
It draws disdain in some circles to come out and say this, but there are places in journalism for automatic writing. Not the Miss Cleo kind, mind you — the kind done by computers. This is not a new trend (though news organizations, as ever, think things are invented only when they notice them), but it’s received increasing notice given the continued decline in the economic fortunes of most news organizations coupled with some high-profile examples.
The most recent was the “Shamrock Shake” in LA — the St. Patrick’s Day earthquake — when an LA Times “quakebot” generated a story on the quake three minutes after it happened.
Whenever an alert comes in from the U.S. Geological Survey about an earthquake above a certain size threshold, Quakebot is programmed to extract the relevant data from the USGS report and plug it into a pre-written template. The story goes into the LAT’s content management system, where it awaits review and publication by a human editor.
Where many can (and did) look upon this story only to gasp in horror and pull their hair out in despairing hunks, I saw this and thought, “Huh. That sounds like a pretty perfect system.” Imagine no quakebot existed, and an earthquake happened. The first thing a modern news organization does is get a blurb on their site that says something to the effect of “An earthquake happened.” This then gets shared on social media.
Meanwhile (if the organization is doing it right — if not, this happens in sequence), a reporter is calling the USGS or surfing over to the web page, trying to dig up the relevant information. They will then plug it in to a fairly formulaic story (“The quake was x.x on the Richter scale, with an epicenter there about 2 miles deep. It was felt …”.) If they can get ahold of a geologist who isn’t busy (either geologisting [as we would hope, given that an earthquake just happened] or on the phone with other media outlets), you might get a quote along the lines of, “Yup, there definitely was an earthquake. There will probably be aftershocks because there usually are, although we have absolutely no way of knowing for certain.”
What’s the difference between the two stories, aside from the fact that one showed up much faster? Data-based reporting absolutely falls into my crusade to automate all tasks that don’t actually require a human. The automated method of initial reporting on the quake is functionally identical to the human one, except that it a) takes less time and b) frees up a reporter to go do actual reporting that a computer can’t do.
The computer can’t make a qualitative assessment of how the quake is affecting people’s moods, or how anxious people are about aftershocks. Reporters should be out talking to people, rather than querying a computer to get data that another computer can easily understand and process.
Perhaps the most cogent argument against computer-generated stories is the potential proliferation of such content. After all, one might argue, if every California news outlet had a quakebot, we’d have dozens of stories that all said the same thing without reporting anything new.
(This is me laughing quietly to myself. This is the sound of everyone waking up to the current problem with media when you no longer have a geographic monopoly thanks to the internet.)
No one is saying that all stories, or even most, will be written by computers, but it’s not difficult to imagine that a good number of them will be, simply because most stories today have significant chunks that aren’t deeply reported. They’re cribbed from press releases, interpreted from box scores or condensed from the wire. If we leave the drudge work to the computers, we can free up reporters to do things that computers can’t, and actually produce more, better content. It’s quite literally win-win. The primary losers are those companies who buy too deeply into the idea that they can generate all their content automatically.
I still wholeheartedly think that entirely generated content is essentially useless to end-users.
In the darkest corner of the newsroom, bounded on one wall by library-style bookshelves and a long cubicle on the other, there sit two computers. They’re stacked vertically, attached to the same LCD (how fancy!) monitor via a KVM switch.
They sit and hum — silently when they’re first booted up, much louder after any length of time — and one of them grinds horrendously when it tries to seek information from the deepest recesses of its brain, much like me when someone asks a question during WWE RAW. They are vestiges. Relics. Antiquated reminders of the 20-plus-year-old system we recently dumped in favor of a new (CLOUD-BASED, we’re so hip!) publishing system.
Together, they jointly ran the vast majority of our automated processes, barely doing together what even a relatively modern machine could do with ease all on its own. Make no mistake, automation is our mantra at the York Daily Record. We don’t want to make people do what robots (/machines) could and/or should be doing. To that end, we have a couple big projects in the hopper in addition to a seemingly endless series of smaller ones that crop up and are dealt with in the course of a day or two.
But the loud, imminent demise of AutoMate (the program we used to schedule and run tasks) meant that the project was getting pushed to the front of the line. Since we were replacing it anyway, we wanted to at least modernize the computer (still running Windows 2000, since the old client could go no higher) and, hopefully, the program.
Since most of the work is now handled in the cloud, filing photos served as the big workflow we wanted to tackle. With the advent of mobile journalism, it’s not uncommon to want photos from the photographers at the scene. Unfortunately, our current setup required a VPN into our local server, then an upload to a drop folder that got pushed to the server. All that effort only took care of the print end, and required a laptop to get the particular flavor of VPN working properly.
What we wanted was an easy way to get photos from any device (photographers frequently work using only their phones or tablets, because it’s one less and/or lighter piece of equipment they have to lug around versus a laptop) and push it to three places — the web, print and our archive. The simplest solution seemed to be getting the file into our system and then moving it around from there.
Enter Dropbox. It’s extraordinary how even free services can now do what used to require expensive software that was frequently less reliable. Using the free 2GB Dropbox plan, we made sure that all of the devices were syncing to the same account, as well as to the “new” automation machine.
(Since a new AutoMate license is somewhere between $995–1495, we grabbed an old 10.6.8 iMac that was lying around and pressed it into service.)
After spending the better part of a day getting Apple’s Automator program to do all of the steps I wanted, four hours of testing proved enough to determine that Folder Actions, succinctly, suck. They were frequently skipping files and then just letting them sit, or worse yet failing and still moving them on. Luckily, a $28 program called Hazel is like Folder Actions, except it actually works. Highly recommended. That, plus the $5 Yummy FTP Watcher, resulted in us having a robust system for filing from the field that’s a) easy for photogs to use, and b) results in us getting the quality of photos we need in the places we want.
This would be much easier nowadays, as you'd just have a cloud-based Digital Asset Management system, but the budget would also be MUCH higher.
I get why corporations love control. I do, really. The idea that some mere employee, someone whose livelihood depends upon your beneficence, holding the keys to your kingdom in their hands with no external controls? Quick, someone fetch the enterprise fainting couch!
For the most part, enterprises have started to see the value in giving their employees more freedom in terms of things like flex time or BYOD policies. Requiring everyone to use Internet Explorer 6 (for example) only led to a) increased insecurity for those who refused to use inferior products and had to develop workarounds and b) productivity slowdowns for those not able or too lazy to circumvent the system.
But again, that pesky thing where companies refuse to trust their employees rears its ugly head, and now the answer is apparently Snapchat. For enterprise. No, really.
Again, I understand the basic impetus behind this line of thinking, but it fails on two levels, both of them human. One: If you make it in the employee’s best interest to not share vital strategic or business information with a competitor, that employee (provided he/she is acting rationally) will not do so. This worry is, at heart, an admission that a company is not providing its employees with the proper incentive to act against the company.
One (sane) angle of approach would be to properly incentivize your employees, but increasing reliance on and faith in technology over humans (Ibid.) has rendered this a nonstarter. That very reliance, however, is also this policy’s downfall.
The article provides three strategies:
- Time bombs (Snapchat)
- Barriers (geofencing)
- Biometrics
Let’s get through them quickly. As any teenager (or Google search) will tell you, Snapchat’s ability to have your photos deleted only works as long as the other party wants it to. Otherwise, one quick screenshot (or app, or API call, or any of a dozen alternatives) is all it takes to have that naked selfie float around Reddit forever.
So Option 1 works as long as every other advantage that computers offer (universal access, instantaneous/error-free/non-destructive copying, etc.) goes away. Which seems unlikely.
Geofencing! IT administrators can know EXACTLY where your device is and limit your access there. Of course, if you can look at it somewhere, you can also copy it. Because, again, computers. And if you can copy it, you can convert it (either automatically or manually — and though it may take more time, I doubt that anyone who’s letting/helping documents out the door is going to be deterred by a little hassle). And then your fancy geofencing looks a lot more like the actual US-Mexico border fence than you probably intended.
Biometrics. Exact same arguments as above. Then you get hit by the double whammy that implementing these types of policies tends to make the end users (up and down the chain) more lax about security, because they’re putting their faith in the technologies — which have hidden dependencies and assumptions that most people don’t bother to think through, and ultimately wind up being their downfall.
Interestingly, the second and third policies require the utmost trust in the employee (“You can look at this only in these locations, but please don’t then share it”) — the very trust the first one explicitly tries to engineer away (“You can only view this for X amount of time before it self-destructs”). And you’re employing the most fantastic way of breeding resentment (and therefore increasing the likelihood of the leaks you’re trying to prevent): show someone you don’t trust them in the slightest.
The quickest, easiest, cheapest and most secure form of information control is always going to be hiring, trusting and training the right people. It seems like a lot of work up front, but the weakest link in any chain of security is always the human element. And the smarter/more alert those people are to the risks, the easier it is for them to mitigate tricky situations. There’s no app for that.
Unfortunately, the ubiquity of surveillance capitalism has pushed people strongly in the direction of control over trust.
The whistle sounds, the kick is up and, just like that, football season is upon us. Most newspapers have produced some kind of high school football preview over the years, and it hits the sweet spot of subscriber interest coupled with advertising dollars. Moving that over to the digital realm has been a bit more difficult, at least for us.
Our (corporately homegrown) CMS doesn't really do well with one-off tabs short of creating a brand-new section, so previously the only items making the jump from print to digital were the tab stories, as stories. Last year we changed that trend with an iPad-only app we produced using Adobe's Digital Publishing Suite.
With help from a corporate deal, we wanted to explore the ways that an app could help us present our content. At the time of creation, there were options for more device-agnostic profiles, but the way the DPS deal was set up we could produce the iPad app for free; anything else incurred a per-download charge (being a free download, we weren't ready to lose money on the basis of popularity). We were all pretty happy with the way the product turned out, but were disappointed by the limitations. The iPad-only specification severely limited its potential audience, and the fact that none of it was indexable or easily importable made it feel more like producing an interactive PDF than a true digital product. Though we were satisfied with the app, we determined in the future we'd likely steer clear of the app-only route.
Planning
When we decided we wanted to do the preview again for this year, everyone was in favor of going with a responsive design — it allowed for the maximum possible audience as well as the smallest amount of work to hit said audience. The only problem was that our CMS doesn't support responsive design, so we'd have to go around it.
This problem was compounded when we decided on the scope of the project. Our high school football coverage is run by GameTimePA, which consists of the sports journalists from the York Daily Record, Hanover Evening Sun, Chambersburg Public Opinion and Lebanon Daily News. The four newsrooms are considered a "cluster," which means that we're relatively close geographically and tend to work together. Since the last preview, however, GameTimePA had expanded to include our corporate siblings in the Philadelphia area, meaning we now encompassed something like 10 newsrooms stretching from Central Pennsylvania to the New Jersey border.
And we're all on different CMSes.
One of the few commonalities we do share are Google corporate accounts. Though our corporate policy does not allow for publishing to the web or sharing publicly (another rant for another time), it at least gives us an authentication system to work with.
By now, there's a fairly defined set of content that goes into the tab.
There are league-specific items (preview, review, players to watch) and team-specific ones (story, photo, writeup, etc.). Starting to sound like a data table to you yet? By the time we finished, we actually ended up with some fairly robust sheets/tables for things that would generally fall under the category of "administration." But the content was only half the problem. Translating it into the final product still loomed ahead of us. Because we only have one server we can use, ever (thanks, zero dollars to spend on tech!), it couldn't be too resource-intensive — I honestly worried that even using PHP includes to power that many pageviews would overtax it.
Since the site is a preview, it's not going to be updated that often, negating the primary downside of a flat-file build system (a longer time to publish). I've mentioned before that we've previously built off of Bootstrap, but the limitations we kept hitting in terms of templating (many elements require specific, one-off classes and styles to work right on all devices) drove us to look in another direction.
The framework that seemed most complete and contained the elements we were looking for was Zurb's Foundation. Though it was not without its own headaches (Foundation 5 is built off an old version of SASS, which can play hell with your compiler — the solution is to replace the deprecated global variables, specifically replacing !default; with !global; and replacing if === false statements with if not statements, as outlined in an answer here. Zurb says they're rewriting the SASS for F6), it ultimately worked out for us.
Build
The basic method for extracting data from the Google Docs turned out both easier and more difficult than expected. The original plan was to query the two main admin sheets (that described the league structures as well as the league pages) and go from there.
That much was easy — I wrote a Google Apps Script, granted it access to my Docs, and had it output customized JSON based on which pages are queried.
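Conceptually, the script is just "sheet in, JSON out." A minimal sketch (the sheet and parameter names here are made up, not our real admin sheets):

```javascript
// Sketch of the sheet-to-JSON web app: the PHP build script requests a page,
// the script returns that sheet's rows as JSON keyed by the header row.
function doGet(e) {
  var sheet = SpreadsheetApp.openById(e.parameter.id)
                            .getSheetByName(e.parameter.page || 'leagues');
  var rows = sheet.getDataRange().getValues();
  var headers = rows.shift();                       // first row holds field names
  var out = rows.map(function (row) {
    var obj = {};
    headers.forEach(function (h, i) { obj[h] = row[i]; });
    return obj;
  });
  return ContentService.createTextOutput(JSON.stringify(out))
                       .setMimeType(ContentService.MimeType.JSON);
}
```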
A PHP build script (which can be set to rebuild the whole thing, a whole league, or a league's teams or league pages) grabs that info, then goes back and grabs the data for the queried pages. It's a lot of calls (hence each update being referred to as a "build," so that the content desk would understand that this is not a WordPress post they're updating), but the most important thing was to keep content creation and updating as easy as possible — I can convince editors to go back and edit their typos in a Google Doc, whereas it's much more difficult to convince them to dig into an HTML file to find their errors without creating more problems. The PHP script outputs partial templates based on the type of page — again, in the interest of not having to rebuild the whole app every time a small change is made, I didn't want to rely on the PHP scripts to build everything; they're strictly for extracting data in a sensible manner.
The PHP script outputs a combination of JSON and .kit files. .kit is the file extension for CodeKit 2's .kit language (I heartily recommend CodeKit2 for web devs, by the way), which is essentially PHP includes for HTML. This worked perfectly for our plans, since it allowed the major parts of the templates to be kept in a single location without having to literally regenerate the whole site (the PHP build script takes, on average, about 3-5 minutes to output the site — the .kit compile takes about 20 seconds). Dropping the .kit files into the build folder automatically generates the static HTML files in a different directory, and the site is ready to go.
Challenges
Aside from the obvious challenges of just getting things to work, the biggest challenge was extracting the text from the Google Docs with formatting intact. There are methods using the getAttributes method of the Text class, but I could not get them to work reliably. (Of course, when I finally went to Google the partial answers I'd seen earlier, I found a Markdown converter script — one that can email you the converted document — that could easily have been adapted. Damnit.)
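For the record, the getAttributes() route we couldn't get reliable looks something like this hedged sketch — it only pulls out bold runs, and it's illustrative rather than working production code:

```javascript
// Walk each paragraph's text, find where formatting changes, and pull out the
// bold runs. This is the getAttributes() route we never got fully reliable.
function getBoldRuns(docId) {
  var body = DocumentApp.openById(docId).getBody();
  var runs = [];
  body.getParagraphs().forEach(function (paragraph) {
    var text = paragraph.editAsText();
    if (!text.getText()) { return; }                 // skip empty paragraphs
    var breaks = text.getTextAttributeIndices();     // offsets where attributes change
    breaks.forEach(function (start, i) {
      if (text.getAttributes(start)[DocumentApp.Attribute.BOLD]) {
        var end = (i + 1 < breaks.length) ? breaks[i + 1] : text.getText().length;
        runs.push(text.getText().substring(start, end));
      }
    });
  });
  return runs;
}
```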
We did not even look at, much less open the can of worms that is embedded images.
Epilogue
We're beyond happy with our decision to forgo the app route in favor of responsive design — we had more visitors to the site in its first hour live than we'd had downloads of the app to that point (more than a year later). The larger potential audience, the ability to deep-link into the site and the ease of access (get it wherever you are!) combined to make it a much bigger success. There are still a few updates we're going to get in before the start of the season, though — more teams, full rosters and some videos are still to come.
GameTimePA HS Football Preview — The actual site
The one "published" joke I've ever had was when I submitted a joke review for Codekit 3. Proud of it to this day, even more so because mine was the only joke that got through from the beta-testers.
The one "published" joke I've ever had was when I submitted a joke review for Codekit 3. Proud of it to this day, even more so because mine was the only joke that got through from the beta-testers.
It’s all Henry Ford’s fault. While it’s almost certainly true that if he hadn’t innovated the production line and interchangeable parts, someone else would have, he stands squarely in the gun sights of history when we rail against technology making humans irrelevant.
He saw that robots and automation could produce a more uniform product more efficiently, and we’ve been off to the races ever since. Computers only make it worse. Thanks to Bill Gates, even before the epidemic of big data, computers and the internet have been tried and convicted of killing the middle class, newspapers and, counter-intuitively, porn, via a variety of methods.
But the first one, the middle class, is the one I want to focus on. It’s beyond true at this point that people have lost white-collar jobs to computers. As any 10 minutes of MAD MEN will tell you, there used to be entire departments engaged in activities that today are done by one person or, at most, one team. Things like secretarial pools (for typing), mockup artists and even broad swaths of accounting have been felled by three words: Word, Photoshop and Excel.
But for the most part, that’s actually OK. Computers are designed to and should be used for streamlining everyday tasks, allowing people to work more efficiently and (because all things must have a Legitimate Business Purpose) even saving the company money by consolidating the number of employees to produce a given widget.
These are what we’ll call sensible (though regrettable) redundancies. But the problem with technological innovation is that we think any problem, with sufficient tech wizardry thrown at it, will disappear.
The flaw with this philosophy is that, much as with medicine and side effects, sometimes the troubles with the cure are worse than the problem it was trying to solve.
It’s 5 p.m. You’ve come home after a long day of work and, according to Amazon’s website, your brandy new Shiny should be at the door. Amazon queried the UPS database, which confirmed that the driver had scanned the barcode on your package as having been dropped off at your home.
Yet, despite looking on the porch, peeking behind the rosebushes and checking with your neighbor, it’s nowhere to be found. Time for the phone tree.
Everyone’s dealt with phone trees. They do make sense, to a point. Why on earth would you route every single call through one (or more, depending on the size of your organization) person, who would then have to manually shift them off to the appropriate extension?
An automated greeting with options to go through for finding the person you want to reach makes perfect sense in a number of scenarios. But right now you’re waiting for a package that says it’s been delivered, even though it’s clearly not. And when you call up UPS, you damn well know that you don’t 1) Want to Ship A Package, 2) Track A Package, 3) Schedule A Pickup, 4) Inquire About Freight Services, or any of the other options the robot gives you.
You could try Tracking The Package. But you’ve already interacted with UPS’s system. We know the system is wrong: it thinks the package has been delivered when it hasn’t been. The problem is that the system has no conception that it could be wrong. All it’s ever going to be able to tell you is that the package has been delivered.
Naturally, you started mashing 0 the minute the robot asked whether you wanted to converse with it in Spanish (automatonic show-off). 0 is frequently the magic number that tells the system, “Sorry, I need to talk to an actual human being because you’re so arrogant you can’t even admit the possibility that you could be wrong.”
(Of course, the very first thing the helpful representative does is query the computer so she can tell you, “Ma'am, this says the package was delivered,” adding a third layer of the same information confirming itself, but that’s also another rant for another time.)
This is a design flaw, a self-reinforcing feedback loop. The system tells the website you’re wrong, so when you call to inquire, the representative checks ... the same system, which of course agrees with itself. And the reason this is a problem is that this implementation of automation actually makes the jobs (and lives) of humans harder. We’ve so completely bought into the superiority of computers that, faced with a real-life discrepancy, we almost always take their word over that of a human being.
Consider how many times someone’s complained about the technology where you work. Is the software you use every day to do your job completely bug-free? Is it even designed to do the things you’re forced to do with it? Know anyone in the food service industry? Ask them about their point-of-sale system.
Think about all the customer service interactions you were involved in from the buying side that included faulty technology. How often has the employee said, “Oh, that’s clearly wrong, let me fix that.”? The best-case scenario in that situation is that someone gets sent to go check that you were not, in fact, lying when you said the shirt was on sale even though the computer didn’t realize it. Or everyone gets to cool their jets while the manager wanders over from the back of the store to enter the special “override” code that forces the computer to accept the input of the human being operating it.
All it boils down to, essentially, is that these companies trust their computers more than their employees. (Which points, frankly, to absolutely terrible HR work.) This makes sense if all you care about is hiring people you can pay a pittance who will do the bare minimum, and rely on the computers to police everything. It falls apart somewhat if you actually care about your customers not hating the experience of going to your store.
To a certain extent, they’re extending Ford’s maxim: Using computers gives them a more reliable outcome. The problem is that they don’t bother to alter course even when that outcome is awful, because they believe that hiring the people to do the task properly would be difficult, expensive or not worth the money.
Thus, homogeneity is prized over efficacy. And that’s their prerogative, I guess. After all, everything must have its Legitimate Business Purpose, and there’s no Business Purpose more Legitimate than “it costs me less money today/this month/this quarter.” And perhaps the giants of various industries (Amazon, UPS, Walmart etc.) are so entrenched — or they’ve devolved everything to the commodity level so that price is the only differentiator — that they’ll never have to worry about the upstart who innovates on service and providing a user experience that’s actually pleasant for the user. Just ask Microsoft.
This seems especially true in the age of AI.
Rarely is the question asked, "Is our children tweeting?" This question is likely nonexistent in journalism schools, which currently provide the means for 95+ percent of aspiring journalists to so reach said aspirations. Leaving aside the relative "duh" factor (one imagines someone who walks into J101 without a Twitter handle is the same kind of person who scrunches up his nose and furrows his brow at the thought of a "smart ... phone?"), simple (slightly old) statistics tell us that 15% of Americans on the Internet use Twitter.
(This is probably an important statistic for newsrooms in general to be aware of vis-a-vis how much time they devote to it, but that's another matter.)
For most journalism students, Twitter is very likely already a part of life. Every introduction they're given to Twitter during a class is probably time better spent doing anything else, like learning about reporting. Or actually reporting. Or learning HTML.
I know this idea is not a popular one. The allure and promise of every new CMS or web service that comes out almost always includes a line similar to, "Requires no coding!" or "No design experience necessary!" And they're right, for the most part. If all you're looking to do is make words appear on the internet, or be able to embed whatever the latest Storify/NewHive/GeoFeedia widget they came out with, you probably don't need to know HTML.
Until your embed breaks. Or you get a call from a reader who's looking at your latest Spundge on an iPad app and can't read a word. Or someone goes in to edit your story and accidentally kills off a closing tag, or adds an open one, and everything disappears.
Suddenly it's "find the three people in the newsroom who know HTML," or even worse, try to track down someone in IT who's willing to listen. Not exactly attractive prospects. Heck, having knowledge of how the web works would probably even help them use these other technologies — not just in troubleshooting, but in basic setup and implementation. In the same way we expect a basic competence in journalists to produce their stories in Word (complete with whatever styles or code your antiquated pagination system might prescribe), so too should we expect the same on digital.
Especially in a news climate where reporters are expected as a matter of routine to file their own stories to the web, it's ludicrous that they're not expected to know which tags self-close, or even the basic theory behind opening and closing tags. No one ever did their job worse because they knew how to use their tools properly.
I'm not saying everyone needs to be able to code his or her own blog, but everyone should have a basic command of their most prominent platform. It's time we shifted the expectations for reporters from "not focused entirely on print" to "actually focused on digital."
Thanks to Elon, no one asks if our children are tweeting anymore. There's a big advantage in learning how to use all your tools properly, even if it doesn't seem like it.
I broke my phone. Again.
It's not all that surprising, really. I've lost any number of phones to what I consider "normal use" — and what my father dubs "horrendous neglect" — like dropping it or getting it wet. And for the non-normal usage ... Can I really be blamed for a bus running over my phone?
(It was a flip phone; I was in college, I got off the bus with the phone flipped open, ready to text, whereupon it jumped [jumped! mind you] from my hands and flung itself under the bus. Likely out of envy of other, smarter phones, coupled with pity for me, stuck with it. You are missed, phone. Well, not so much missed. Vaguely remembered.)
This time, it again wasn't my fault, except for the part where it broke as a direct result of my actions. I dropped the phone on my bed (as per usual), whereupon it rebounded onto the floor and struck, screen-first, against the spines of a tall stack of particularly weighty hardback books. When I turned it on, it did not. Well, the buttons lit up, but the screen just flashed blue lightning at me from the visible cracks in the screen. I thought it best to shut the stupid thing down before I Force-lightninged myself.
So I went to Craigslist and eBay, and eventually found an older smartphone Amazon had on sale for about $75. This is actually why I tend to shy away from the newest, most expensive tech — I'm afraid I'll break it. The phone that fell under a bus was a flip phone back when flip phones weren't really in style anymore. The phone I broke a few days ago was a creaking Android phone I got for $100. It ran Gingerbread, for cryin' out loud — for you non-techies, it was about as powerful as an original iPhone.
(OBLIGATORY NOTE TO MY EMPLOYERS: Things that are not mine, in the sense that I did not pay for, I am much more respectful of. I have never thrown [nor even lightly dropped] the shiny things I am given to play work with.)
Am I just unusually careless with my things, the broken litany not even a quarter-listed in the previous paragraph? Anecdotal evidence from Facebook would suggest I am, but only just. Think of how many times you've seen something to the effect of, "lost/broke my phone, so text me your number and your name so I know who you are." I doubt most people go through phones quite as quickly as I do, but the churn rate is higher than the 2-year contract upgrade. Heck, even actual evidence suggests that 1/3rd of the populace has lost or damaged a phone, and 20 percent of the people reading this post have dropped one in the john (one being a phone; definitely don't want an unclear antecedent with that phrasing).
I can't find hard numbers on it, but I'd be willing to bet that more of those damaged phones are at the hands of the young, in this case meaning my generation and below. Those who are older tend to have a few things we young'uns don't: patience. Perspective. Oh, yeah, and a healthy fear of technology.
Maybe there's something to be said for the reverence with which most old people (here defined as anyone over the age of 35) treat their various gadgets, be they smartphones, tablets or even (shudder) feature phones.
Want to see it in action? Hand your mom an iPhone. I almost dare you. My mom rocked an Android for almost 8 months and got nothing but frustrations. When she finally caved into the peer pressure and bought the iProduct — despite having the aforementioned practice on a smartphone — I became the by-phone (my dad's phone) tech support for two weeks while she figured out things like dialing a number, texting one person at a time (which I was more than happy to help with, given the texts I was getting that were meant for other people) and even figuring out how to shut it off properly.
It's a truism at this point that a disconnect exists between so-called "digital natives" and the rest of the world (we'll call them "normal people," but only until we digital natives have a majority. Then WE can be normal [for once]), and I think it comes down to how technology is viewed.
Forgive the overarching generalizations below: They do not represent absolutely everyone in both cohorts, but I think they draw general outlines that most people match up with fairly well.
People who have seen new technology come into use view the technology only in terms of its functionality, a means to an end. Cellphones (and smartphones) are not their lifeline to life itself, they're a means of communication. Sure, they'll learn how to Facebook on the go, post Instagrams to Twitter and message their unruly teen to make sure he gets home before curfew, but if you took it away they'd still survive. They've got paper address books, landlines and actual (still digital, usually) cameras that aren't grafted onto a phone.
I think most younger folks (present company included) treat a phone more as an appendage. Losing it is a lot like amputation, in that we can survive the trauma, but recovery involves actually having to go back and completely relearn how to do things.
Imagine you had to go without a cellphone or a tablet for six months, with no prior warning. How would you communicate with friends? How would you find a restaurant? How would your friends know that "Certain ppl need to lern to respect there bffs and not go behind they're bak." Some people wouldn't even be able to do their jobs properly. (Journalists.)
Paradoxically, this overreliance on technology actually leads some of us to treat it as a commodity. It's certainly true in my case. I don't really care what computer I'm using as long as it runs. I don't really care what operating system my phone runs as long as it has Angry Birds. From a physicality standpoint, this non-attachment means I'm probably more wanton in my care than I should be (hence my perpetual progression of buying new phones) but, judging by their Facebook statuses, more of my friends take after me than resemble my parents.
I don't treat most gadgets like they're shiny objects I'm worried might get scuffed. I treat them like books: I'm not going to go out of my way to destroy them, but bending the pages back or throwing it (literally) on a pile or the floor is perfectly acceptable, because I don't really care if it gets beat up a little.
To me, technology is a tool, in that you use it to create other things — it just happens to quite often be a very expensive tool. But you're supposed to use tools. Screwdrivers are meant to drive screws ... to build a birdhouse. Paintbrushes are meant to brush paint ... to make the birdhouse attractive to birds. Similarly, smartphones are meant to phone smarts ... well, you know what I mean.
You're not supposed to take it easy on tools. You're supposed to use them hard, or at least as hard as you need to. And you just have to live with the fact that sometimes, even though you may be using it properly, a hammer will randomly have its head fly off and see the claw part embed itself in the wall about a foot to the right of your head (true story).
Though I bet a $100 hammer wouldn't. (grumble)
I remember being vehemently anti-smartphone and then, after I caved and bought an Android, anti-Apple. Now I'm pretty much anti-everything new, except I also want the fastest, prettiest devices. I'm basically the worst.
Hoo boy! As a [technology writer/reporter without a story idea/old person], I've seen my share of changes in life. But [new product] is about to completely alter [area in which new technology will have extremely slight impact].
I was at [public place] the other day when I saw a young person extricate [latest technological obsession] from her purse. Now, I don't disparage [Generation X or newer] their technological revolutions, but it seems to me that [outdated technology people don't use as much but is still prevalent] works just fine, for my purposes.
See, my generation, the [any generation older than X, whose name invariably invokes a more positive connotation than more recent ones], we didn't need your fancy new [latest technological obsession] for [arduous chore made easier by modern advancements, but still possible to perform "the hard way"]. We were happy as [animals commonly presumed to be in a constant state of rapture] with [old technology] — it may have taken longer, but that was the way we liked it.
You see, with the [fancy new technology], people aren't able to [incidental advantage of old technology no one noticed/cared about until new technology]. Why, when we wanted to talk to one another, we just [verb for specific type of communication]-ed on our [technology two generations removed; old enough to be nostalgic about, but young enough to masquerade at least a passing interest in technological advancements].
[Obligatory reference to that goddamn Nicholas Carr article/book about how the Internet is imploding our brains].
I don't see why young people today feel the need to live their lives so quickly, or expensively. Sometimes, you just need to take the time to [verb indicating the activation of one of the senses] the [pages/roses/other noun that often evokes nostalgia or pleasure]. That's why I refuse to buy [advanced technology]. I'm perfectly happy with [older technology that's itself a vast improvement over how things "used to be done"] — the way things used to be [until a newer version of the advanced technology comes out and I can bitch about that while upgrading to the previous generation without seeming hypocritical].
One day, when [generation too young to have a name yet] grows up, they won't remember the feel of [physical object being replaced by technology], or the joy of browsing [physical store replaced by Amazon, et al.] to spontaneously find [physical object]. Maybe it's just me, but I don't think being [verbified formation of name of new technology] necessarily means [pun-ish play on verbified name of thing being replaced by new technology].
See the inspiration for this guide here.