This weekend, msnbc.com launched a sweeping redesign of the most important part of their site: the story page. The result is unlike anything other major news sites are offering, and it’s a bold step in a direction no competitor has yet gone: the elimination of pageviews as a primary metric.
For many years, I’ve railed against tricks like pagination and “jump pages” as a means to goose pageviews. Honest people in the industry will tell you these are simply acceptable tricks to bump revenue a bit, while disingenuous or uninformed people will use “readability” as an excuse to make users click ten times to read ten parts of a single story. For this latest redesign, msnbc.com has decided to de-emphasize pageviews entirely and present stories in a manner that maximizes enjoyment and, as a result, total time on site.
What do I mean by this?
Think of how a typical user session works on most news sites these days. A user loads an article (1 pageview), pops open a slideshow (1 pageview), flips through 30 slides of an HTML-based slideshow (30 pageviews). That’s 32 pageviews and a lot of extraneous downloading and page refreshing.
On new msnbc.com story pages, the above sequence would register one pageview: the initial one. The rest of the interactions occur within the page itself. Can msnbc.com serve ad impressions against in-page interactions? Sure, and that’s key to the strategy, but as a user, your experience is much smoother, and as an advertiser, the impressions you purchase are almost guaranteed to come across human eyes since your ads are only loaded upon user interaction.
This is the first time (to my knowledge) this sort of model has been deployed on a major media site with over a billion pageviews a month, and it has the potential to change the entire industry if it works. It’s also a big risk, as most advertisers are not used to thinking of inventory this way. We like big risks with big payoffs though and we feel that when you take care of the user and the advertiser at the same time, you’re probably onto something.
Ad model aside, there are also tons of other interesting things about the new msnbc.com story pages:
To be clear, the msnbc.com team is very proud of what’s been launched so far, but is under no illusions that things are perfect yet. Everyone involved in creating these new story pages is monitoring reaction closely and ready to modify anything that needs improvement. Since we have plenty of thoughtful design and development voices here on Mike Industries, I’d love to open this thread up for some reactions. What is working for you, and what, if anything, would you change? The team is listening.
Through much of the late 90s and early 00s, I remember having the same conversation over and over again about Apple and Microsoft. I had it with my friends, I had it with my colleagues, and I had it with anyone else who was interested in computers. It went something like this:
Other person: “When are you going to give up already and start using a PC? The war is over. Apple lost.”
Me: “They still make the best stuff and I want to support the company that makes the best stuff; not a company that uses their monopoly to sell products.”
Other person: “Don’t you think Apple would do the same thing if they were in charge?”
Me: “Yes. They’d probably be even more ruthless, but at least they’d make great products.”
From there, the conversation would tail off in another direction but I always remember thinking wishfully to myself that if Apple ever did rule the world again, what a fantastic problem it would be. Instead of having our future dictated to us by a company who didn’t even care enough to fix a broken web browser for over five years, we’d have our future dictated to us by a company who produced the most wonderful products in the world. The dream seemed so far-fetched, however, that it was easy to miss the potential for nightmare in it.
Apple will probably finish this year a larger company than Microsoft, from a market capitalization perspective. That would mean the world values the sum of future cashflows into Apple more than those of any company in the United States besides Exxon-Mobil. If, God forbid, the terrible BP oil disaster gets worse and has cascading effects on other oil companies, we could see Apple at #1.
So in a sense, we’ve now admitted — as investors at least — that Apple owns our wallets, many years into the future. This actually feels good right now, though, in a way. Not only am I using a great operating system, but lots of other people are too. Not only do I have a phone that keeps me connected, but I really enjoy using it too. Not only can I craft richly designed web experiences for geeks with good browsers but a good majority of people can finally view them too.
Most things are great so far. The reward we’ve reaped as a society for shoving greenbacks into Apple’s bank account for the last decade is that we have much better stuff now. It’s the exact opposite effect we got from making Microsoft big.
Those who are following the situation, however, have noticed a few things change recently, the most obvious being a move towards an incredibly closed operating system in iPhones and iPads. Many believe it’s only a matter of time before most of Apple’s products run on a similar OS. There are many definitions of “closed” vs. “open” but here is mine:
Steve Jobs wrote in his mostly reasonable letter condemning Flash that it was Adobe whose stuff was closed and Apple was the one using open technologies, but Adobe’s CEO — despite saying very little of substance — was right about one thing: this is a smokescreen. In order to use the Flash format, all I need to do is either buy a single copy of it (if the IDE is useful to me), or use any number of other, free compilers out there. In other words, Adobe never even needs to know about me and never needs to approve what I’m doing or selling.
In order to get my stuff onto an iPad or iPhone, however, I must receive explicit approval from a human being working for Apple, after this human being has manually reviewed my work, derived my intentions for the product, and made a value judgment on what my creation brings to the device. As long as that process exists, there can be no argument: the iPhone and iPad are more closed than just about anything we’ve ever seen before… including Flash. To claim that Apple is somehow more open than Adobe because it is pushing open standards like HTML5 (really for its own benefit) is folly.
Adobe’s problem in this mess is that they’ve painted themselves into a corner with the public. They used to be loved by everyone who used their products. Ask a designer ten years ago whether they’d rather switch away from Apple or switch away from Adobe and I’m sure most would have stuck with Adobe. Today, not only has the situation reversed itself, but I find myself actively trying to move away from Adobe on my own. They’ve shipped nothing but bloatware for the past five years, each version of CS being slower and buggier than the previous and offering very little important utility in return. $700-$1000 for Photoshop CS5 and it still can’t even print a tiled document. Adobe Creative Suite, in many ways, has become the Microsoft Office for the creative design and development industry. Somehow I bet that was a company goal in a presentation at some point. Mission accomplished. So when Apple stiffarms Adobe by changing section 3.3.1 of their iPhone OS developer agreement, it’s no wonder people aren’t exactly rushing to Adobe’s defense.
Flash has taken a slightly different path towards public distaste and I actually don’t blame Adobe for most of it. When Flash first came out, only the most talented design visionaries used it. When a new Flash site came out in 1999, each one was like a new DaVinci… beautiful works of art that moved the web from a tame, ugly typographically poor medium to a center stage for creativity.
Then the advertisers got ahold of it.
When most people speak ill of Flash, they are actually speaking ill of ads. Watching Flash video on YouTube doesn’t crash your browser; visiting a news site with five annoying Flash ads all trying to synchronize with each other does.
What most of these people don’t realize, though, is that it’s other “open” technologies that play a part in making this happen, and they will continue to long after Flash is history. The OBJECT tag which spawns Flash movies is an open standard. The javascript that popped open that window with the screaming Flash ad is an open standard. And the HTML/CSS that slowly sashayed that 300×250 div right the fuck over that paragraph you were trying to read is an open standard too.
When Flash is gone, this overly aggressive marketing will simply be foisted upon you using more “open” technologies like HTML5. And guess what? It’ll be harder to block because it looks more like content than Flash does.
It also amuses me when people talk about two things in particular with regard to the iPhone and iPad. First, how much better some companies’ iPhone apps are than their web sites, as if the company is somehow so much more gifted at creating iPhone apps than web pages. It feels better because it’s designed for you to do things quickly. Most web sites are actually not designed for speed of task completion at all. They are designed to maximize page views or at the very least, time on site (and hence, maximize revenue). ESPN.com doesn’t want you reading one story about the Mayweather/Mosley fight and then moving on with your day. They want you to read ten more stories after that, check your fantasy teams, and buy a Seahawks jersey. Mobile.espn.com, on the other hand, is more concerned with getting you in and out quickly because they know you have less tolerance for distraction and extraneous clicks when you’re on your phone. The second thing is when people talk about how great content looks in some of these iPad apps. Again, this is a reaction to the lack of distraction, not the tablet form factor.
Content that is free of distractions and potential crashes looks and feels better. Period. It’s not the hardware; it’s the environment.
… which brings us back to Apple and their role in the way we experience information moving forward.
With the iPhone and the iPad, Apple has either smartly or stupidly drawn a line in the sand and declared themselves no longer just the arbiters of hardware and system UI but arbiters of content and commerce as well. If you want to develop or produce content for Apple’s ecosystem, you will do exactly as Apple tells you to do. If you want to enjoy Apple’s products as a consumer, you’ll enjoy every freedom Apple provides and live with every limitation they impose. It’s like a country club. Apple isn’t saying you can’t play golf with your pit-stained t-shirt and denim cutoffs. They’re just saying you can’t do it at their club. Apple wants to run the most profitable country club in the world, with millions of members, but they don’t want everybody; and therein lies the difference between how their resurgence is playing out and how Microsoft’s dominance ultimately played out.
Microsoft wanted 100% share in every market they entered. The thought was that once you dominate a market, you can impose your will on it via pricing, distribution, bundling, and all sorts of other methods designed to maximize profit. To Microsoft in the 1980s, a monopoly was a great problem to aspire to have, and since antitrust laws weren’t routinely applied to software companies, the threat seemed immaterial. The problem with this thinking, however, was that the law eventually caught up to them and crippled their ability to continue operating as a monopoly.
Apple, on the other hand — while in danger of eventually suffering the same fate — seems determined to avoid it. What’s the best way to avoid becoming a monopoly? Make sure you never get close to 100% market share. What’s the best way to temper your market share? Keep prices a bit higher than you could. Keep supply a bit lower than you could. Keep investing in high margin differentiation and not low margin ubiquity. Remember how Microsoft invested $150 million in Apple in 1997 in order to keep them around as a plausible “OS alternative” in hopes of avoiding the antitrust knife? Well Apple already has that in Android, in Blackberry, in Windows Mobile, in Palm, and in Nokia. They are fighting hard right now to make sure they are one of the two or three that will continue to be relevant in 5-10 years, but their goal is clearly not to be at 100% or even 90%. That level of success would get the company trustbusted.
It is this prescient and necessarily restrained motivation that reveals the true reason why Apple has closed up tighter over the last few years: it’s not to take control of the world. It’s specifically to separate themselves from a pack of companies they need as their competitors but want relegated to the lower margin areas of the market. Apple will stay closed as long as being closed is a net positive to their business. Until people either start abandoning their products because of this or do the opposite and adopt their products at a rate which creates a monopoly, they will continue operating at their current clip: high innovation, high profits, and high control.
It’s scary to people because they remember the harm other companies have done when they reached monopoly status, but with Google, Microsoft, Nokia, RIM, and now HP all keeping the market healthy with different alternatives, there is no excuse for not voting with your feet if you’re unhappy. Apple’s not going to take over the world because — if for no other reason — the laws of the United States won’t let them. If you don’t want to contribute to their success because their behavior is distasteful to you, then don’t; but don’t forget how fortunate we are to have such a ruthlessly innovative company at the helm of the ship at this point in time. Either get on it or just pick another boat and draft in its wake. When the biggest problem in personal technology is that the leading company is getting a little too exceptional, it’s a good problem to have.
I have been pitching this concept for at least seven years now to anyone who would listen: a train that never stops but instead uses accelerating and decelerating pods to shuttle passengers on and off at each approaching stop.
Normally, when I’ve drawn this out for people, the concept has been met with a reaction along the lines of “uhhh, good luck with that one!” The only difference between what I’ve been pitching and the concept in the video below (via kottke) is that my model uses individual pods instead of one big “group pod” in order to let people off and on at even more desirable locations, but it’s great to see someone finally put something like this into a video animation:
If I’ve thought of it, and now these people in Asia have thought of it, countless others have probably thought of it as well. Now it’s just time to make it happen. A great train ride is the most enjoyable way to travel, in my opinion.
I finally put in my pre-order for SimpleScott’s Designing Obama book a few minutes ago. I wanted to buy it earlier but never overcame the inertia until I got a chance to have beers with Scott and then listen to him speak at the excellent Webstock conference in New Zealand last week (by the way, thanks to Khoi Vinh for asking me to step in for him as a speaker). Can I also just say that Webstock is the best designed conference I’ve ever seen?
Scott’s a great designer, obviously, but hearing about the care that’s going into just the production of the book is going to make this piece of art a must-have. I may even order two and keep one suspended in formaldehyde.
While ordering the book, one part of the process stuck out to me as something I’d never seen before, even having ordered probably a thousand items online in the past: when I typed in my credit card number, a green checkmark showed up immediately after the last digit was entered. My immediate suspicion was that they were counting digits and gave me a check to indicate I had typed in enough of them, but again, having never seen that before, my interest was piqued. I tried deleting the last digit and replacing it with a 1, then a 2, then a 3, and so on. Only when I typed the actual digit from the credit card did I get the green checkmark again.
Further investigation revealed that no server calls were being made, which meant this was some sort of client-side algorithm that verified credit card patterns. Iiiiiiiiiinteresting! Even more investigation revealed that this was the work of something I’d never heard of: the Luhn Algorithm.
The Luhn Algorithm is a checksum formula, easily implemented in javascript, PHP, or just about any other programming language, that uses some simple arithmetic to determine whether a credit card number could be valid. Credit card companies issue numbers that conform to this algorithm, so if a number fails the check, it’s definitely not valid (although a number that passes isn’t necessarily tied to a real account). Before you say to yourself “wow, that’s some neat, new technology I can use!”, note that the Luhn Algorithm has been around since 1954!
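Out of curiosity, I sketched the check out myself. This is just my own illustration in javascript, not the actual code from Scott’s order form: walking from the rightmost digit, you double every second digit, subtract 9 from any product greater than 9, and sum everything; a well-formed number’s total is divisible by 10.

```javascript
// Luhn check: true if the digit string passes the checksum.
// A failing number is definitely invalid; a passing number is merely
// well-formed, which is why it's safe to run client-side with no server call.
function luhnValid(number) {
  const digits = number.replace(/\D/g, ""); // ignore spaces and dashes
  if (digits.length === 0) return false;
  let sum = 0;
  let double = false; // doubling starts at the second digit from the right
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) {
      d *= 2;
      if (d > 9) d -= 9; // same as adding the two digits of the product
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

luhnValid("4111 1111 1111 1111"); // a classic well-formed test number → true
luhnValid("4111111111111112");    // last digit off by one → false
```

Since the whole thing is a few integer operations, it costs essentially nothing to run on every keystroke, which explains the instant green checkmark.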
Although using this algorithm in your own projects is clearly not a necessity, I see a couple of potential advantages and a couple of potential disadvantages:
I’m curious to see if this catches on as a trend.
I’ve been trying to square my lack of enthusiasm about the iPad with the seemingly very positive analyses from those smarter than me.
After a few days, I think I finally reconciled it with a simple realization: the only reason I’m not enthusiastic about the iPad as a consumer is that it simply falls below my value curve at this point in time. Consider the graph below:
When the iPhone came out, I would have paid $1000 for it. I still would, to be honest. I wouldn’t exactly be happy about it, but I’d do it. It provides so much utility to me, it’s become such an indispensable part of my life, and it has no perfect substitutes, so its price elasticity to me is extremely low. Apple can charge pretty much whatever it wants and I will buy exactly one iPhone.
When the iPad was announced, however, the value curve was very different for me. It is currently a device I’d pay about $199 for. Not $499-$829. That is not to denigrate it at all. It just means its current value to me is below its current price. I don’t read eBooks, I have a laptop for my mobile computing needs, and I don’t have a place in my workflow for this device at this point in time.
The key is what happens over time, however.
The first effect is a pricing effect. As the price of both devices inevitably decreases, the value equation begins to change. A $10,000 iPad sells maybe 1000 units. A $1000 iPad sells maybe a million units. A $100 iPad sells 50 million units. And a $10 iPad sells about 500 million units.
So then, “liking” the iPad is really just a question of “what price would you pay for it?” For me, it’s about $199 right now. Electronic toy price, in other words. For others it may be a lot higher, and still others, lower.
The second effect is a utility effect. The utility of an iPhone is very high right now. It already plugs into existing cellular and wifi networks, it fits in your pocket, it replaces multiple devices, and it has few competitors. What happens when it’s not the only horse in the race, though? We’re already starting to see stiff competition from Google with the Nexus One, and Nokia undoubtedly wants to play this game too. It’s unclear whether any competitors will succeed in making a better smartphone than Apple, but they will certainly create viable substitutes, thus reducing the unique utility of the device.
Look at what happens (possibly) with the iPad though. You can just sense by looking at it that it’s a bit “early”. There isn’t enough to do with it yet. The New York Times app looks nice and all, but it’s a far cry from a world of widely available, richly laid out e-publications (I personally question, however, if we even need this sort of world). You also can’t use the iPad for home automation stuff yet (although my buddy Danny will be working on it). You can’t beam Hulu from it to your TV. You can’t video conference with it. You can’t control it with voice commands. You can’t run it for a week on a single charge. These are all things I think we’ll see in the next several years, and thus it may become a more valuable device as time goes on.
When either the price is lowered to my value threshold, or my value threshold rises due to increased utility, that is when a purchase will be made. Perhaps even multiple purchases.
There is little doubt in my mind — upon finally thinking this through from a dispassionately microeconomic standpoint — that at least one of these two things will happen; and that is why Apple wins in the end, despite our best attempts to be curmudgeonly about it.
I normally stay out of the fray when somebody in our industry does something stupid — because it happens so often — but what Jason Calacanis did to his readers on Twitter last night and this morning is as clear an example of pomposity and disrespect as you’ll ever find:
Jason, with a good-sized Twitter following of over 90,000, began sending out tweets with details about Apple’s new tablet before it was officially announced this morning. He claimed to have been given one by Apple, for press purposes, and began reeling off details in separate tweets, such as:
You get the picture.
Several media outlets including TechCrunch, the Wall Street Journal, and thousands of individuals picked up Jason’s tweets and that’s how I found out about them (I don’t follow Jason). Upon inspecting the tweets, I immediately knew how this was going to end: badly. As someone who’s followed Apple closely for most of my life and also someone who doesn’t really give Jason Calacanis credit for much of anything besides incessantly promoting himself, I knew Apple would never give a guy like that a device in advance under any circumstances, for any reason.
Sadly, and predictably, however, Jason was able to fool thousands of others. He’ll be the first to try and convince you his tweets were too absurd to be construed by any reasonable person as true, but we’re not just talking about country bumpkins who were duped here. Look no further than Robert Scoble’s first comment in the comment thread on CrunchGear (or any of his comments on Twitter). He doesn’t appear to think it’s a silly joke upon first read. Neither did Neil McIntosh at the Wall Street Journal. And neither did many thousands of Jason’s “followers” throughout the world.
Let me see if I can make this as clear as possible:
Never dupe your readers.
Never dupe your readers.
For someone who seems so dead set on being a lot more influential than he actually is, it’s the height of irony that Jason would do something like this. The fact that it occurred only on Twitter, and was a lot more believable than it would have been if it were really just an altruistic joke, tells us all we need to know about the motivations here. It went something like this:
Well, mission accomplished, I suppose.
This sort of thing makes me shake my head because I’ve seen it before and it just never turns out well… and it’s never forgotten. I remember a few years ago in our little corner of the tech industry — web design and development — two reasonably well known colleagues started a high-profile fight on their blogs, each accusing the other of “borrowing” various design elements and outright creative theft at times. It went on for a few blog posts and some of us began taking sides in the comment threads, trying to defend the good names of our friends. After a day or two, both people revealed that the whole thing was not real and meant to “illustrate a lesson” about creative license. As you can imagine, we were all pretty livid. Not even necessarily because it was a waste of our time or anything, but because we had been purposely duped by people we trust. It didn’t matter that the intentions were not evil. Nobody likes to be duped.
Which brings us back to our story about Jason and the ruse he pulled on his followers. I’ve felt this way for a few years now, but there are many people in our industry who think they are a lot more important than they really are. Some examples that come to mind are:
If you want to be influential, lead by doing, not by talking, and certainly not by duping. If what you create is really good, other people will talk about it for you.
It’s perfectly ok to talk about your own product and do some promotion when appropriate, but what it’s never ok to do is dupe your readers. Don’t make the same mistake yourself. If you want respect, be respectful first.
I just opened up our first ever dedicated interactive design position this week. If you’re just a little bit crazy, you might be perfect for it.
The official way to apply is by sending an email to msnbcjobs@msnbc.com (which you should do if you’re interested), but if you’re a Mike Industries patron, feel free to contact me as well.
Well, it’s January, and as has become commonplace over the last several years, the public is abuzz with anticipation over a new Apple device. This time it’s a tablet.
I think the single most interesting thing about this unannounced tablet is how pumped everyone is about it, despite its lack of obvious value proposition. When we get new Mac models, we get lighter, faster, and prettier machines. When we got the iPod, we got a whole new paradigm for consuming music. And of course, when we got the iPhone, we got the ability to replace multiple devices with a single, all-in-one device that did everything much, much better.
With this tablet thing, however, I feel like I’m much more skeptical than the press, the fanboys, and everyone else who thinks it’s such a slam dunk to change the world. It’s like the greatness of the iPhone has everyone thinking Apple is somehow going to top that level of revolution with each new market they enter. There has always been a magical quality to the company’s development and introduction of products under Steve Jobs, but I wonder if expectations are a bit too high at this particular point in time.
In my opinion, even if the Apple tablet succeeds, I can’t see how it will have nearly as much impact as the iPhone, the iPod, or the Mac; and if it fails, it will be end-of-lifed or morphed into something else within a few years. I don’t think it will replace the laptop and I don’t think it will totally re-invent anything we currently do on our computers. Whereas the multi-touch interface enabled us to do things we’d never dreamed of doing on pocket devices before, I’m not sure it will do the same for bigger screens.
This, from a guy who sleeps in rose-colored Apple-shaped glasses.
In trying to square my lack of enthusiasm with what I’ve been reading about this thing, I keep coming back to the question: what’s it for?
First of all, I think this device is almost entirely for consumption, and not production. It will be borderline unusable for writing essays, designing posters, making movies, and even sending emails. When you want to produce something, you will not do it with this tablet.
With consumption and severely limited production as the premise, what sorts of things could you do with this device? I see four possibilities that could be construed as compelling:
This is probably the only thing on the list that would singlehandedly cause me to purchase an Apple tablet. I haven’t heard anyone talk about it, but this is how it would go: the tablet comes with a dongle that can connect via RCA/component/HDMI to any television. The tablet communicates wirelessly with the dongle to both send video to it via 802.11n (or whatever shiny, new, faster wireless interface is next) and also to control the TV watching experience. In this scenario, you could use it to relay things like live Hulu streams to your TV or display stored video you bought from iTunes or “borrowed” from somewhere else.
There is also a chance this could be done in concert with Apple TV instead of a dongle, but the clear problem it solves for me is “how can I easily display on television the video that is currently playing on my computer?” Right now, the answer to that is to carry my laptop over to my TV, plug it into an extra input, pop the video player full screen (if I even can), and then walk back over to the laptop every time I need to control something. It’s the critical link that is keeping Hulu and similar services from being a much bigger part of my life.
My feeling is that Apple TV has never done as well as Apple hoped, but also that it is not something the company is going to give up on anytime soon. Part of me wonders if the tablet, among other things, is just a much better form to stuff Apple TV functionality into. If it is, I’m probably in.
Almost everyone who has a Kindle loves the hell out of it. I probably would have bought one awhile ago, but I just don’t read enough books to justify it. Aaron Swartz, on the other hand, with his 132 book per year reading pace, could probably justify owning three (sidenote: WTF Aaron!) (sidenote #2, WTFFFFF JOE!!!). If the Apple tablet did e-books plus a few other things in this list, however, I might be a buyer.
To me, the biggest clue that Steve Jobs cares about this market is that he says he doesn’t. Jobs famously said a few years ago, in response to a question about entering the e-book reader market:
“It doesn’t matter how good or bad the product is, the fact is that people don’t read anymore.”
Not only is that statement preposterous, but it flies in the face of the positioning Apple tries to bestow on its products: that they are for intelligent consumers. Guess what is strongly associated with intelligence? Reading. Particularly books. What Jobs really meant by his statement was:
“People are reading fewer and fewer books because they are less convenient than other types of media.”
The first statement is terse, dismissive, and meant to throw the press off Apple’s scent. The second statement is what you will probably hear at the launch event.
Another clear clue that e-publication reading is a large part of the Apple tablet is the flub by Bill Keller of the New York Times a few months ago. Keller’s unauthorized reference to the tablet all but guarantees they have a deal with Apple to display New York Times content on this device. It could be something very simple and uncompelling like a Times Reader app that is offered for free, but what if it’s something more substantial like the New York Times actually subsidizing the tablet if you sign up for a two year subscription to the e-NYT? I’m actually less interested in what the New York Times (and other) content looks like on the tablet and more intrigued by what the economics behind this sort of content delivery look like.
Another question I have about this tablet — if it’s going to compete with the Kindle — is what its equivalent of E Ink is. The Kindle enjoys a whopping one week battery life largely because it doesn’t require a backlight to operate. Currently, all of Apple’s screens are backlit, and unless the company has an answer to that, it may have problems competing head-to-head with the Kindle on pure e-book reading. Or has Apple invented a way to overlay an E Ink screen on the same surface as an LCD screen? That would be ridiculously awesome.
There aren’t a whole lot of really great solutions out there for watching video on the go. An iPhone is too small for most people, while a laptop is probably overkill. A tablet with 15-20 hours of battery life and the ability to stand up like an easel might fit the bill perfectly for viewing on a bus, on a plane, in a car, or elsewhere on the go.
I don’t think this benefit alone would sell a lot of tablets, but it would help justify a purchase for some people.
I’ve never been into video chat as I find it extremely awkward, but I understand it’s big in the grandparents’ set and every other set where people are potentially far away from loved ones. While I mentioned above that I don’t expect a lot of content production to be done on the tablet, live video capture and broadcast could be a notable exception because it requires you to do nothing but look into the tablet and speak.
A lot of my skepticism around tablet computing stems from my belief that the form factor just isn’t as beneficial as it seems. Besides when sitting in a cramped airline seat, I don’t recall many situations in which I wished the bottom half of my laptop would disappear. When I have, it’s always been for high-volume consumption: long-form video and long-form text. In other words, things that don’t require me to do much of anything besides staring at the screen. Does a market exist for a device that does just these things and not much else? I think the Kindle has proved that at the right price point, the answer is yes. I guess I just don’t consider that as world-changing a product as other people do. Then again, we won’t know until we see it, right?
As far as actual form factor goes, I expect something significantly more klutz-proof than the iPhone. My guess is an all-aluminum body with an aluminum panel that covers the device’s screen when closed and folds open to double as an easel when you’re using the device on a flat surface. I expect a solid-state drive as the only storage option but would like to see an SD-card slot as well. 802.11n (or better) wireless is a given, but if this thing has 3G/4G connectivity, it’s not going to be through AT&T. If I had to bet one way or another, I would bet on wifi only. If this device is successful, it’s another bargaining chip for Apple when it renews iPhone negotiations with carriers, and I don’t think this sort of connectivity would sell many more units right now.
So anyway, that’s all I have for now. I expect a device that will sell a decent number of units but fall short of the world-changing expectations placed upon it by people who think Apple will never release another product that doesn’t top its previous one.
Lately I’ve been intrigued by situations in which the amount of effort required to complete a task is not overwhelming but it is enough to prevent the task from getting done. The latest example, from a couple of weeks ago, was wine journaling. Sure it only takes a few minutes to pull out a laptop, log into your wine-dot-whatever account and structure a proper review, but unless a few minutes becomes a few seconds, I’m out… and so are thousands of other people.
Minertia is what I might call it… short for a “minimal level of inertia”.
Many companies have succeeded primarily because their products overcome minertia. Twitter is a good example of this. There were millions of people with (purportedly entertaining) thoughts, but none of these thoughts were worth spending more than 30 seconds to publish. Twitter provided a way to turn these idle thoughts into legitimate published communication with 30 seconds of effort, and BAM, they are the hottest company on the internet.
On to more pedestrian matters though: recording stuff on TV.
I’ll use Tivo as an example because that’s what I have, but this could apply to any DVR, Apple TV, Boxee, etc.:
Here is how I decide to add a show to the repertoire of things my Tivo records automatically:
As you can see, this sometimes equates to several minutes of work (I’ve spent over 15 minutes trying to do this on my iPhone). Again, we’re not talking about a huge time investment here, but it’s enough to require steps 1-3 whereas with a little minertia reduction, people might be willing to record shows the first time they hear about them.
What got me thinking about this was an interview with Rex I read yesterday. In it, he mentions Modern Family as the best show on TV right now (I say it’s Dexter or Million Dollar Listing, but whatever). Thankfully, Rex’s interview was about the third time I’d heard this so I bucked up and did step 4. But here’s how much easier it could be:
The effort would thus be reduced to under 10 seconds.
As with the wine example, I fully expect someone to leave a comment pointing me to something that “kinda sorta” does this, but not in as optimal of a manner as I described above. Anybody know of something that does this? Or better yet, anyone work at Tivo and want to build this? :)
Don’t let the beautiful bottle fool you… this is terrible wine.
If you’re an iPhone developer, you probably struggle a lot with the issue of effort vs. revenue. In other words, you think you’ve thought of something cool and you don’t mind investing the time to produce it, but you just aren’t sure if anyone will actually pay for it.
Here’s an app that — if well done — I would pay $20 or more for:
Whenever I’m having a glass of wine, allow me to snap a picture of the bottle (or the barcode from the bottle) and within 30 seconds enter some very basic information about it:
Once I hit submit, save this to my wine library database, accessible via iPhone or web browser.
Are there other wine rating apps and services available right now? Definitely. But unfortunately none of them pass the 30 second test. They don’t even pass the 5 minute test. Usually when you’re in the middle of drinking wine — whether at a wedding, a party, at dinner, or in a dark alley — spending 5 minutes typing notes into your iPhone is just not something you’d ever consider doing… and this is the critical void that no one has filled yet.
It should be “snap, select, select, done”. By reducing the effort required to create a personal wine note library to this simple 30 second routine, you’d be enabling thousands of recreational wine drinkers to do something they’ve never been able to do before: actually remember what wines they try and which ones they like. That level of detail, in most cases, is all people really need, and it’s something I am 100% sure many would gladly pay for.
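The “snap, select, select, done” routine above could be backed by a very small data structure. Here is a minimal sketch in Python; every name in it (`WineNote`, `photo_path`, a 1-to-5 rating, a varietal picker for the second “select”) is my own guess at what such an app might capture, not the design of any existing service:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class WineNote:
    """One entry in a personal wine library: a photo plus two quick selections."""
    photo_path: str  # "snap": a picture of the bottle or its barcode
    rating: int      # first "select": e.g. 1 to 5 stars
    varietal: str    # second "select": picked from a short list
    noted_at: datetime = field(default_factory=datetime.now)


class WineLibrary:
    """In-memory stand-in for the synced wine library database."""

    def __init__(self):
        self.notes: List[WineNote] = []

    def add(self, note: WineNote) -> None:
        # "done": one tap to save the note
        self.notes.append(note)

    def favorites(self, min_rating: int = 4) -> List[WineNote]:
        # The whole point: later, recall which wines you actually liked.
        return [n for n in self.notes if n.rating >= min_rating]


library = WineLibrary()
library.add(WineNote("img_0042.jpg", rating=5, varietal="Pinot Noir"))
library.add(WineNote("img_0043.jpg", rating=2, varietal="Merlot"))
print([n.varietal for n in library.favorites()])  # prints ['Pinot Noir']
```

The sketch’s point is that a note needs only three user-supplied fields, which is what would keep the whole routine under 30 seconds.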
Ok then, who’s going to step up? I’ll be your first sale.