Archive for the ‘Musings’ Category

What Should a PlayStation Vita Sequel Look Like?

Many people would say that Sony should abandon portable gaming altogether, especially given how much Nintendo has dominated this arena in recent years, and, more importantly, how mobile gaming now overshadows both companies’ offerings.  As I’ve waxed on before, I prefer portable to mobile (i.e. handheld console to smartphone), in large part because of the physical controls.  While the analog sticks, face buttons, d-pad, and shoulder buttons are all small on the Vita, I will take these any day over using only a touchscreen.  As such, I want Sony to continue working in this admittedly dwindling market; here are my armchair ideas:


While the OLED display on the original Vita is quite rich, Sony was right to switch to an LCD panel in the revised and slimmed-down Vita PCH-2000.  Two benefits arose: one, the unit was less expensive to manufacture; two, the battery life was noticeably improved.  A sequel to the Vita should retain this status quo (though a higher PPI and a wide-angle IPS panel would be welcome).  Size-wise, I would leave the dimensions alone.  In my pie-in-the-sky dreams, I would also like Sony to adopt a Gorilla Glass 3 panel, which would be scratch resistant and light, compared to the existing scratch-prone plastic.


I believe the best addition the Vita brought to market was a second analog stick, which was lacking on the PlayStation Portable series.  This allowed for true 3D gaming, which greatly paid off in many of Vita’s titles.  But while the handheld attempted to mimic a DualShock 3, it was lacking in a few areas: first, the face buttons could stand to be about 20% larger than they are now.  Same goes for the d-pad and analog sticks.  There is way more bezel on the Vita than there needs to be, so there is room to invade those areas.  Second, the sequel needs to somehow account for L2 and R2, as well as L3 and R3 (i.e. triggers to go with the shoulder buttons, and clickable analog sticks).  As it stands today, those missing input methods are simulated with various touchscreen or rear touchpad taps. Again, we want physical controls.

Additionally, let’s mimic the DualShock 4 and offer Share and Options buttons instead of Select and Start.  As the Vita has today, we’ll retain Sixaxis motion support as well.


I really can’t justify the inclusion of a rear camera.  Who is using his or her Vita to take snapshots?  The Venn diagram of Vita owners and smartphone owners must overlap almost completely.  As for the front camera, I can see some utility in treating it like a PlayStation 4 Camera, though its field of view is much tighter than its home-console counterpart’s.  I would consider that front one optional but probably unnecessary: we don’t need to be video conferencing on this device.  It’s for games.  Dropping both will save a fair amount of cost.

Touch Input

Keeping the touchscreen makes a lot of sense for menu navigation.  It’s also a good way to simulate the DualShock 4’s touchpad, so I say leave it in.  The rear touchpad, on the other hand, is useless.  Lose it: doing so saves money, reduces power consumption, frees up space, and eliminates spurious inputs to boot.


This is a big one: Sony should use ARM processors in the next Vita instead of proprietary ones.  Sony realized the value in switching to an industry-standard x86-64 processor in the PS4; it should do the same in its handheld by embracing the industry standard in mobile computing.  ARM processors are incredibly powerful now, and better still, they’re incredibly power efficient.  Moreover, adopting such a ubiquitous architecture will simplify development and encourage mobile developers to consider porting their games to this new handheld.


This console needs to adopt at least 802.11n (as should the PS4, by the way), but 802.11ac would be even better, obviously.  High-speed, reliable Wi-Fi will help make it an even better Remote Play device than its forebear.  Secondly, let’s not even bother with cellular networking.  The extra cost, from a manufacturing standpoint as well as to the consumer, is not worth it for the limited functionality it offered in the 3G Vita.  Most people have hotspot access on their smartphone plans now anyway.


Sony’s decision to employ a proprietary memory card in the Vita is simultaneously understandable from a business angle and deeply anti-consumer from a customer’s perspective.  Worse still, these cards were shockingly expensive for their limited storage: the 32GB capacity was a staggering $100, while its SDXC equivalent was less than $30.  Today, a 64GB Vita card has taken that $100 position, but an SDXC card of that size is $25.  Absurd difference.

Either use SDXC with industry-standard encryption (perhaps AES-128 or AES-256) to prevent save-file manipulation and piracy, or use only internal memory and obfuscate the whole thing.  If Sony were to choose the latter, however, this console could not have less than 64GB; ideally, it would be 128GB.  That could get expensive.  I say Sony should simply employ SDXC cards.

Game Distribution

The use of game cartridges is woefully dated; let’s drop that altogether and save money on manufacturing and retail distribution.  Leaving out the cartridge slot reduces the cost of this console further and frees up more space.  Instead, all games would be digital only, as is done on mobile now.  Will this alienate people who don’t have reliable access to broadband, or who have data caps?  Sure.  But this would be a good way to loop in retail partners: offer “download kiosks” where customers can connect their new handhelds and download games directly.

PlayStation Now

Time will tell if Sony’s investment into Gaikai will pay off in the form of PlayStation Now, but the technology itself appears sound to me.  But to make this Vita sequel a good citizen of this service, the hardware ideas from above will need to be there: one-to-one DualShock mapping is necessary.  Do that, and this console could potentially rule the roost with a subscription service that will hopefully open up to most PS1, PS2, and PS3 games.  Such a library would be incomparable.


Heretofore, I’ve been calling this proposed handheld the Vita sequel.  I actually don’t think it should use that name at all, since the PlayStation Vita has an undeserved reputation as an overpriced, under-supported platform (well, kind of).  Instead, let’s embrace the highly successful PlayStation Portable brand and name this next console PSP4 (to align it with the PS4 product cycle).1


This console must, must, must come in under $200, even if that’s $199.99.  Eliminating the cameras and rear touchpad, committing to LCD, dropping the game cartridge slot, and slimming the whole unit down with the extra space afforded should go a long way.  From a development standpoint, leaning on well-understood technology like ARM processors, as well as avoiding superfluous application development (like an email client, calendar, and other quasi-smartphone apps), should help, too.

Final Thoughts

The console should look and feel like a premium device.  I would adopt the two-tone, matte-like finish of the DualShock 4, reserving the piano black for a small bezel around the display.  Let’s round the edges like the Vita slim’s to make this thing as comfortable as can be.  (It should go without saying, but I’m going to say it anyway: the PSP4 should employ a common connector, like micro-USB, just as the Vita slim does.  Let’s support 2.4A fast charging at the same time.)

This should be thought of as a hardcore device, but if it’s priced in a competitive way and has strong support for retro-games, I believe it could do well.  It may never sell 100 million units, but if it somehow reached, say, 25 million, that would be worth the effort.

Most PlayStation Vita owners love their handhelds: the game-attachment rate is especially high.  Last I heard, it was over a dozen games per owner on average.  (This exceeds every home console and competing portable.)  Indie developers have realized the value in tapping into so loyal a fanbase, and those companies have flourished there.  My vision for the PSP4 should continue to attract those vital creators but also (hopefully) entice interest from developers who have expansive back catalogs, or who are interested in mobile development already.

  1.  Sony could justify it as the fourth PSP since it’s the successor to the PSP-1000/2000/3000, PSP Go, and PSP-E1000.  (I know that’s pushing it logically, since all of these are really the same handheld, but compare this to Microsoft’s numbering for Windows, which has not made sense in a long time.  It is what it is.)

Written by Michael

21 September 2015 at 8:00 pm

Ads are the New Malware

Until recently, I’ve tolerated web ads because I understood that nothing is really free on the Internet.  For as magical as the web seems, there are thousands of miles of fiber running underground and through the ocean, data centers full of power-consuming server clusters, and scores of people whose job it is to keep all this running.  (And that’s a grossly oversimplified summary.)  This is all before we even get to the web designers and content producers who build everything on top of this infrastructure.

In other words, I am sympathetic to the very real costs the Internet represents, and how many creators rely on subsidizing their content through sponsorships.

What is unacceptable, however, is that these ads have become increasingly obnoxious and intrusive, and not just in the creepy way they track you so advertisers can build demographic profiles.  To add insult to injury, these ads steal more than just your screen real estate while assaulting your senses.  Smartphones are becoming the predominant way the web is consumed (cf. the United Kingdom), and these ads now cause pages to load more slowly, steal CPU cycles, drain your battery faster (tangibly), and even consume an inordinate amount of data.  Worse still, they pose a serious security risk when malicious code is hidden in them and then disseminated across the Internet through ad networks.  See “Malvertising” if you want a good scare.

I am all for the idea of sponsorships (transparent, honest ones), but these privacy-invasive eyesores are to this era what worms were to the early 2000s: ubiquitous, performance-sapping, and dangerous.  Ad-blockers, which I used to be uncomfortable with, are this generation’s antivirus.

And just as it is with traditional malware1, this whole thing is turning into an arms race.  While one company might institute a Do Not Track preference for cookies, another might find a way to circumvent this.  Jim Gordon’s words about escalation from Batman Begins ring in my ears.

So, what can be done?  Perhaps we need stricter laws; maybe we should boycott sites and services that use these highly questionable tactics; whatever — but under no circumstances should we just accept it.  One bit of good news, in the meantime, is that iOS users will benefit from an important new feature in version 9, set to come out today: Apple has added Content Blockers to mobile Safari, meaning you’ll now be able to download ad-blocking extensions on iPhones and iPads.  I’m beta-testing Purify now, and I’m very pleased with its performance so far.  (I’m also using uBlock for desktop Safari.)  For the Android users out there, extensions of this nature have existed for a while, but I think you’ll need to do some digging to really pull the tendrils of Google (and your manufacturer, and your carrier) from your phone’s heart.

If there are websites that you really believe in and want to support (especially if they employ tasteful and noninvasive advertising), then you can whitelist them.  That’s great.  But I would rather support sites I love by purchasing merchandise or services from them, rather than play this game we now find ourselves in.

As far as I’m concerned, the time for tolerating this is over.  We need to protect ourselves.

1. I’m comfortable comparing banner, splash, and pop-up ads to malware because they meet most of the criteria: “any software used to disrupt computer operation, gather sensitive information, or gain access to private computer systems”.

P.S.  I am keenly aware that WordPress employs ads to subsidize free publishing, which this blog relies on.  I apologize for the hypocrisy.  If it counts for anything, I’m starting to think seriously about changing this situation.

Written by Michael

16 September 2015 at 10:00 am

Waste Not, Want Naught

I remember being less hot in Florida compared to Montana.  That sounds like crazy talk, I know, but as hot as summers got down south, I could always run inside to the A/C.  I never knew how much I took that for granted until I moved out here and discovered that the miracle of on-demand cold exists in almost no homes or apartments.  As the temperatures rise into the high eighties and low nineties this month, the only thing we can do is try to stave off the heat with fans.  As I write this, I actually have two fans pointed directly at me in such an endeavor.  Additionally, there is a double-bladed fan in the window to pull in as much cool air as possible before the sun breaks through this cloud cover.

One of the issues that fans develop is dust accumulation, especially over extended use.  This is especially true of tower fans, which my brother and I have a few of.  But as much as I may like them, they’re nearly impossible to clean.

I had a very good Seville Classics oscillating tower fan that I used to rely on quite heavily, but recently it started making an unbearable squeal and failed to push any air out at all.  After letting it sit for several weeks, I turned it back on to test it, only to discover that the situation had worsened: now only the indicator light came on, with no internal movement whatsoever.  By any reasonable estimation, this fan was dead.

So I placed it by my front door to remind myself to take it to the dumpster the next morning.  But just as I was pulling it outside that next day, I thought to myself: what if I could clean it out?  Maybe that would correct the problem?  Wasn’t it worth the try?

I put it back in my room and decided to dismantle it at the next chance I had.  I’d never done that before, so the task seemed more daunting than it really was.  It took more than 25 screws and repeated viewings of a how-to YouTube video before I finally liberated the blade cylinder, but once I did, I discovered ghastly clumps of dust.  It looked like the inside of a bagless vacuum, no exaggeration.  Since the cylinder was made of nothing but plastic, I placed it in my bathtub and hosed it down.  As for the motor and oscillator, I took compressed air to them and found still more dust clumps.  Seriously, if you should ever attempt this, consider wearing a mask.

Even though it seemed like a long shot, I went through all that cleaning and reassembly.  Then I plugged it in and tested it: voila, works just like new.  In resurrecting this fan, I realized something important.  I was about to throw away what was probably a $100 appliance because it stopped working.  That sounds stupid when I say it like that — but isn’t that what we always do now?  We discard things when they’ve seemingly outlived their usefulness.  Everything is a commodity now, so it’s all disposable.  This seems to apply not only to appliances, but more expensive things, like computers and smartphones.  What are we doing?

We need a change in mindset.  Without getting political, I feel like my dollar has less and less spending power all the time.  Don’t you feel the same way?  In the grand scheme of things, $100 isn’t much, I admit, but when you consider how many little things like that add up in just a year, it’s extraordinary.  Indeed, we’re filling landfills with tons of things that could work again with just a modicum of effort.

I used to dismiss this whole idea on the grounds that my time was too valuable to spend on fighting with repairs.  Truth is, when you add up those needlessly large landfills and all the wasted money that would find far better use invested elsewhere, there’s one inescapable truth: none of us can afford to make any such claim again.  Make the time.

Written by Michael

9 July 2015 at 12:07 am

Posted in Musings


The Terminator Franchise

I should first admit that I think that not only is Terminator 2: Judgment Day the best sequel of all time (yes, even edging out Star Wars: The Empire Strikes Back), it’s also one of the best movies of all time, period.  In addition to incredible action, effects, and pacing, the movie manages to deliver an important message about human nature and our propensity towards violence and self destruction.  The final line is so powerful that I’ll never forget it.  Sarah Connor narrates over an empty road during the night: “The unknown future rolls toward us.  I face it, for the first time, with a sense of hope.  Because if a machine, a Terminator, can learn the value of human life, maybe we can too.”

This film delivers on the promise from the first movie, when Kyle Reese repeats the words John Connor made him memorize to Sarah: “Thank you, Sarah, for your courage through the dark years.  I can’t help you with what you must soon face, except to say that the future is not set.  There is no fate but what we make for ourselves.  You must be stronger than you imagine you can be.  You must survive, or I will never exist.”  Indeed, the transformation of Sarah Connor from the first film to the sequel is staggering — she goes from a naive and gentle waitress in The Terminator to a hardened and violent soldier in Terminator 2: Judgment Day.  Linda Hamilton’s performances in both films are incomparable.  The best part is that they do indeed change the course of history that night at Cyberdyne Systems, a night in which their Terminator stands as an unstoppable force against law enforcement in what can only be described as a preview of the very apocalypse they’re trying to prevent.  Poignantly, no fate.

This second film is the perfect ending to the franchise.  There is zero need to go any further, but given that Hollywood is a profit-driven enterprise, more sequels were inevitable.  As such, we were given what I consider one of the most offensive followups in cinematic history: Terminator 3: Rise of the Machines.  Not because the film was all that bad: truly, most of it was enjoyable in the mindless-action-movie sense.  But because of the ending, when it’s revealed that Judgment Day is inevitable after all, that everything that came before was all for naught.  The central tenet of James Cameron’s two films was discarded for the sake of generating more sequels — which he had no part in, I should point out.

Terminator Salvation was a confusing mess, so much so that I barely remember it.  The post-Judgment Day world shown in that film is radically different from the portrayals in the James Cameron entries.  As such, I can’t even consider it canon (for whatever that means at this point), since the most recent entry, Terminator: Genisys, seems to ignore it as well.

Speaking of, I went to see Genisys yesterday, and just like Rise of the Machines, I felt it was a completely enjoyable action flick.  Unlike Rise of the Machines, however, Genisys at least avoided throwing a giant middle finger at the No Fate thread from the original films.  Nevertheless, this film’s overarching plot, and how it fits into the timeline, is bizarre and borderline nonsensical.  Even so, I appreciate that this movie essentially establishes that it exists in its own parallel timeline, which at least affords the possibility of taking the franchise in another direction without denigrating the Cameron films.

Genisys shows us parts of The Terminator, but turned on their head because of the timeline changes.  Instead of the Sarah Connor who was blissfully naive at the beginning, Kyle Reese discovers one who is already trained and ready for the oncoming apocalypse.  But the strangeness of this version of Sarah is that, unlike Linda Hamilton’s Judgment Day incarnation, Emilia Clarke’s rendition has much softer edges.  In a way, I’m fine with this, but on the other hand, I feel like the writers decided to make her more likable to modern audiences.  The tough-plus-sweet combination is a strange one for the Sarah Connor I know.

Similarly, Kyle Reese is quite different this time around, except that it makes less sense because he’s still supposed to be the same incarnation as Michael Biehn’s version.  And while Jai Courtney does fine work in his portrayal of this version of Reese, I feel strongly that his is considerably weaker.  Considerably safer.  What I mean by that is that Biehn played an emotionally shredded Kyle Reese who had seen nothing but nonstop horror and death in his life, and bore all kinds of scars both literally and figuratively.  This was a man with severe PTSD who feels wildly out of place when he travels to pre-Judgment Day 1984.  You can completely understand why such a disaffected person would fall in love with a photograph of Sarah Connor, this idyllic beauty that looked like she lived on another world.  The fantasy of her ran deeper for Kyle than we could comprehend.

Conversely, Jai Courtney’s Kyle Reese is shown to be more gallant, more emotionally stable.  He comes off like a more generic expression of what a soldier should be, as is often the case in contemporary action movies, as opposed to what one can become after decades of agony.  The juxtaposition between Courtney’s Reese and Clarke’s Connor is more an awkward blind date than what Biehn and Hamilton had, which was visceral and mutually dependent.

After thinking about this, I realized what my preferred sequel to the Cameron movies would have been.  Rather than pretend that the actions of Sarah, the Terminator, young John, and Miles accounted for nothing, I would have followed the war-hero John Connor in the time after he sends back the two guardians to protect his mother and younger self.  I would go with the Back to the Future Part II model of time travel, with split timelines (alternate realities): while Judgment Day was averted in the past, the time that older John Connor lives in continues on as its own world.

In this still post-Judgment Day world, I would portray the battle against the forces of Skynet as ongoing.  While the resistance managed to destroy its central core, all the machines Skynet created are still functioning, still obeying their directives.  Perhaps, like bees, the machines would designate a new queen, and a new AI, born without human design, would emerge to present a wholly new kind of threat.

Written by Michael

7 July 2015 at 1:54 pm

On Keyboards, the Writer’s Brush

Yes indeed, writers paint with words, and the keyboard is the brush upon a word-processor canvas.

The most writing I’ve ever accomplished (consistently) was on a domed membrane-switch keyboard manufactured by HP, when I was a student at Central Florida (2002-2006).  By keyboard-aficionado standards, that device wasn’t even all that great, but it was far better than anything I’ve had since.  So, what happened in those intervening years?

In 2006, I switched to a MacBook, the first notebook Apple produced with chiclet-style keys.  Over time, I graduated from that original MacBook to a MacBook Pro, which sported a nearly identical keyboard, except backlit.  Sometime after that, I traded computers with my brother, and I’ve been using this iMac ever since.  But once again, its keyboard was fashioned to be identical to Apple’s notebooks’.  From a build-quality standpoint, all of these keyboards were very nice.  In fact, I appreciated the aluminum construction of the Apple Wireless Keyboard I was using until recently.

But chiclet keys have very little travel compared to larger-keycap varieties.  That is to say, they take almost nothing to depress, which sounds like an advantage, but it isn’t: it encourages poor typing mechanics, offers very little tactile response, and the result is poor accuracy, and thus poor speed.  And while I felt plenty fast on that Apple Wireless Keyboard (or any of its predecessors), I found myself making mistakes all the time.  Since 2006! — some nine years.  Crazy as this might sound, this frustration led to a decreased desire to write (among other factors, admittedly).  Sadly, I didn’t put this together until I discovered a product called Das Keyboard.

The gold standard of key switches, at least to aficionados, is mechanical — like a throwback to another era.  Most keyboards these days are membrane-based, just like that HP keyboard and all the Apple-designed keyboards I’ve used.  (At least that HP one had good travel.)  But mechanical-switch keyboards ruled the day before the 2000s.

Thanks to the beauty of eBay, I picked up a Mac-layout Das Keyboard for less than half the cost of a new one.  Now, I know what you must be thinking: a second-hand keyboard?  That sounds disgusting.  And sure enough, you’re right.  Keyboards are filthy, filthy things.  So you can imagine that one of the first things I did after receiving my new friend was use a keycap remover on it and wash every single key in warm, soapy water.  I then painstakingly cleaned all the channels around the switches with Q-tips until it looked almost new.

I’m so happy I did this.  Not only does this keyboard look (and sound) great, it feels great.  I recommend any serious typist take a hard look at a mechanical keyboard.  This particular model of the Das Keyboard uses Cherry MX Blue switches, which are quite loud.  (But as I keep saying parenthetically, they sound great.)  Das also produces a variant of this keyboard with Cherry MX Brown switches, which feel similar but are more muted; they’re still noisy, to be sure, but at a lower pitch than the Blues.  If you cohabitate with another person, you might want to consider that alternative.

So just as a paintbrush is to an artist, an instrument to a musician, or a knife to a chef, a keyboard is a very specific and personal thing to a writer.  Yes, any of these artists can use a different implement to accomplish similar things, but it’s very much like being in the wrong skin.  I do not think I could ever comfortably use a lesser-quality keyboard again and be happy.

Written by Michael

30 June 2015 at 12:44 am

Cellular Contracts vs. Installments

I haven’t dedicated much time on this blog to my day job, as it were, so I thought I might try to explain something that seems to confuse a lot of the customers I interact with at the AT&T Authorized Retailer store where I work.  That subject is AT&T Next (or Verizon Edge, or T-Mobile Jump, or Sprint One Up — whatever), which lets you buy a smartphone at full retail via installment payments.

At first blush, many people wonder why the hell they would want to buy a phone for, say, $649.99 instead of $199.99 on a traditional 2-year contract.  Well, there’s a dirty little secret regarding contracts: you’re usually paying as much or more anyway, and they offer no flexibility whatsoever.

2-Year Contracts: A Primer

Contracts began as a way to entice American wireless customers to buy into what was once a product with very little utility.  I say that because coverage was thin across most of the U.S. for many years.  Now, let’s set the record straight: no phone, not even a crummy basic flip phone, has ever been so cheap to manufacture that a carrier could give it away.  Even back in the analog days (1G!), that was high technology for the time.  But the carriers understood that few people would be willing to spend hundreds of dollars on devices that only worked in some places.  So those carriers dangled free or cheap phones on contract but priced the rate plans high enough to make that subsidy money back.  Truth be told, there was always a misconception that carriers wanted you to get new equipment so they could lock you in.  The best customer was the one who didn’t need a subsidy for a new phone but kept paying a rate as though he or she had one anyway.  The only time a carrier would jump at putting you into a contract with no reservations was back when it was okay to extend a contract just for making a plan change.  (Thankfully, those days are mostly gone.)

An Example of a 2-Year Contract

Rate plans across the industry have been a state of flux recently, but before shared data plans came about, plans were quite stable.  Here’s an example of a common plan for a single line I would have sold back then:

Nation 450: 450 anytime minutes, 5,000 night and weekend minutes – $39.99

Messaging Unlimited: unlimited messaging – $20.00

DataPro 3GB: three gigabytes of data – $30.00

Total – $89.99/mo + tax

If you consented to a 2-year contract, you would pay whatever the subsidized price of the phone was plus a one-time upgrade or activation fee of $40.  We’ll use an iPhone 5s 16GB, since that was the most common phone I sold before all this upheaval happened.  That phone would cost you $199.99 out of pocket (instead of $649.99 at full retail), and that upgrade or activation fee would be applied to the next bill.  For the sake of argument, let’s call that purchase $239.99.  (My state doesn’t have sales tax, so I’m ignoring that consideration on phone price in these examples.)

After two years passed, if you decided to keep your phone because it worked perfectly well and you didn’t want to lock yourself into another contract, your plan price would stay exactly the same: $89.99/mo + tax.  Okay, right?

Yes.  But that’s a bad thing.

You didn’t see it on your bill, but you were paying back your carrier for that $450 subsidy on the iPhone ($649.99 retail minus the $199.99 you paid).  Your wireless company hid the cost in the overall price of your plan.  This is why these companies loved it when you kept your phone past the two years: it was essentially free money.

An Example of AT&T Next

These days, the carriers have new plans that are specifically built to enumerate the cost of the phone subsidy.  Here’s an example of a common plan I would put someone on today, using AT&T Next:

Mobile Share Value 3GB: three gigabytes of data with rollover, unlimited calling and messaging – $40.00

Line access fee: per-line price for each smartphone not in contract – $25.00

Total – $65.00/mo + tax

Now what happens if you want the newest iPhone and you put it on Next installments?  The 24-month installment price for a $649.99 phone (i.e. $649.99 ÷ 24) comes to $27.09 per month.  Add that to the $65.00 plan, and that equals $92.09.  The first thing you might notice is that this is $2.10 more per month than the old Nation 450 total of $89.99.  Over two years, that adds up to $50.40.

Remember, however, that with AT&T Next, you are not paying an upfront, subsidized price of $239.99 ($199.99 plus the $40 upgrade or activation fee).  So, you would save $189.59 over the old plan with the 2-year contract ($239.99 – $50.40).
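The arithmetic above can be condensed into a quick sketch.  All figures are the example prices from this post (not official carrier rates), and the $27.09 installment is simply the post’s rounded figure:

```python
MONTHS = 24

# 2-year contract: Nation 450 plan plus the subsidized phone purchase
contract_plan = 89.99                 # monthly plan price from the example
contract_upfront = 199.99 + 40.00     # subsidized iPhone 5s + upgrade/activation fee
contract_total = contract_plan * MONTHS + contract_upfront

# AT&T Next: Mobile Share Value 3GB ($40) + line access ($25), phone on installments
next_plan = 65.00
installment = 27.09                   # $649.99 / 24, as rounded in the post
next_total = (next_plan + installment) * MONTHS

print(f"2-year contract total: ${contract_total:,.2f}")              # $2,399.75
print(f"AT&T Next total:       ${next_total:,.2f}")                  # $2,210.16
print(f"Savings with Next:     ${contract_total - next_total:,.2f}") # $189.59
```

The savings line matches the figure above: the $239.99 you avoid paying up front, minus the $50.40 extra you pay monthly over two years.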

You can extrapolate these numbers across multiple lines.  The pendulum swings even further in favor of the customer with a 10GB or greater Mobile Share plan, since the line access fee there is $15.00 (compared to the $25.00 in the example above).

The Future

Not everyone qualifies for installments, credit-wise.  As such, there are still some customers who need to either agree to 2-year contracts, buy new phones at full retail all at once, or acquire used phones instead.  But I think this will change: AT&T recently changed the verbiage on its installment agreements from “there is no down payment” to “if you have a down payment”.  Perhaps customers with less-than-optimal credit will still be able to enter into AT&T Next but will need to put some portion of the phone cost down (which would lower the monthly payments anyway).  I can envision a scenario where 2-year contracts disappear altogether after this happens.

Written by Michael

14 May 2015 at 11:44 pm

The Way Forward for Nintendo

Despite what is now a relatively healthy library on Wii U (after a pretty bleak launch window), both the Xbox One and the PlayStation 4 have surpassed the Nintendo console’s install base with relative ease. VGChartz lists the Wii U as having sold 9.1M units, compared to the Xbox One with 11.3M and PS4 with 19.1M. Even more troubling is that the Wii U has been out a year longer than both Microsoft’s and Sony’s offerings.

I don’t think Nintendo is necessarily trying to sell the most units, as though that were its singular measure of success, but I also don’t think the company wants to be a far distant third at the end of the cycle, as Strategy Analytics now predicts. The firm expects that by 2018, Sony will have sold 80M units, Microsoft 57M units, and Wii U 17M units. That is a mere 11% share for the house of Mario and Zelda. The saving grace for Nintendo is its strong software sales and the seemingly unstoppable 3DS, to say nothing of its enormous war chest funded by huge Wii profits in the previous generation.

There are a lot of articles out there that dissect the mistakes Nintendo has made with Wii U and attempt to explain why the console seems destined to turn in an even poorer performance than the GameCube (which was a cult hit but only sold 21.7M units), but I would summarize my feelings thus:

Underpowered hardware

The Wii U features hardware that’s not too dissimilar from the Xbox 360 and PS3. It is certainly more powerful than those 8-to-9 year-old machines, but it’s nestled somewhere between last generation and its new contemporaries. This is reminiscent of the Sega Dreamcast, which similarly came to market too early. The net result of this is that the Wii U is positioned well to receive game ports from last-generation consoles, but porting a current-generation title would be daunting and require so much downscaling that the investment in man hours would be difficult to justify. As development dwindles for those last-gen consoles, so will third-party titles for Wii U, I fear. A similar fate befell the Wii ultimately, but the long legs of the PS2 (which saw its last game released in September 2013) helped sustain it, and by then, Nintendo had sold north of a hundred million units.

Wii U GamePad

The GamePad is certainly a unique idea in gaming: it essentially offers a second screen to allow for a level of multitasking never exploited in console gaming. (Game Boy Advance to GameCube interactivity was interesting but never fully realized.) The problem, however, is that only one GamePad packs in with the console, and worse still, the console will never support more than two of them simultaneously — if ever at all. Shigeru Miyamoto admitted last year that adding dual-GamePad functionality wasn’t even a part of Nintendo’s near-term goals. Instead, Nintendo envisioned that one player would operate the GamePad while the others used Wii remotes, which are themselves nine years old now, presenting developers with the challenge of designing for asymmetric gameplay. It doesn’t help that the GamePad itself feels more like a toy than high technology. It also does not feature multitouch, an unfortunate oversight in a smartphone and tablet world.

Poor online support

Nintendo’s online multiplayer efforts have always felt timid and begrudging to me. In truth, I think that Nintendo would rather see players interact face-to-face in couch co-op rather than through the Internet — which old-time gamers like me can appreciate, actually. The problem is that some of the biggest third-party titles are designed with online gameplay in mind, so this strategy really only works well for Nintendo as a software developer, not companies like Ubisoft, Activision, or Electronic Arts. The future doesn’t look particularly bright on this front either. After all, we’re talking about a company that has consistently resisted technological progress it wasn’t itself invested in: optical media in the fifth generation of consoles, full-sized DVDs in the sixth generation, and HD in the seventh generation. Even with Wii U, Nintendo opted for proprietary discs that cannot hold as much data as Blu-ray, which both the PS4 and Xbox One employ.

Poor third-party developer support

All of these things have led to poor support from third-party developers. Nintendo has long alienated these companies through various strong-arm tactics anyway, but these uneasy relationships have really damaged its ability to stay relevant in any generation with actual competition from other hardware manufacturers. New, envelope-pushing games won’t run on the Wii U’s hardware without serious compromise, the GamePad is something that developers are completely ignoring (hoping instead that you’ve purchased a Pro Controller — even Nintendo has relented and started focusing on these), and without strong online support, even DLC opportunities are bleak for these developers.

A solution

My idea would probably anger long-time Nintendo stalwarts, especially those loyalists who early-adopted the Wii U, but I don’t see another reasonable solution1. It starts with completely abandoning the Wii U; I know that would be seen as a deep betrayal, but there has to come a time when you admit defeat rather than continue beating your head against the wall.

What I would do is abandon attempts to embrace unusual input methods or gimmicks of any sort. While the company deserves a lot of credit for hardware innovation over the years (the ability to save progress in a game, the basic layout of the modern controller, triggers, analog sticks, and rumble among them), they’ve whiffed on motion controls and this faux tablet. (Despite Wii’s incredible sales figures, motion gaming never garnered enough hardcore-gamer interest to matter in the long term. Microsoft and Sony’s attempts to answer the Wii, in the form of Kinect and Move, seem pointless in retrospect, don’t they?) Even the 3D technology found in Nintendo’s handheld is mostly superfluous.

I would recommend that Nintendo release a new home console that is largely based on the Xbox One’s specifications. I suggest this because chasing the PS4 would likely be too expensive. Instead, Nintendo could simply match the Xbox One’s hard drive space, processor, GPU, and RAM, and probably price this hypothetical console at $299. By embracing an x86-64 processor architecture and OpenGL graphics standards, developers could easily port current-generation titles to Nintendo’s new system. Further, the existing Pro Controller for the Wii U should serve as a model for the next generation’s, as it is largely held in high esteem. I also believe that this generation will last for more than five years, so there is still time to capitalize on it. Further still, omit the media functionality altogether: smart TVs and set-top boxes are obviating the need for it anyway. This would be a gamer’s machine.

The complaint from Nintendo employees and fans alike will be, “Well, how would this console differentiate itself from the competition?” The answer, of course, is that Nintendo produces some of the best games in the world, and they’re only available on Nintendo hardware. That’s the hallmark. What I want Nintendo to realize is that its contributions to the world of gaming no longer lie in zany sensors or strange peripherals; Nintendo’s most important contribution is its software library. This company shepherds some of the greatest and most storied franchises in gaming, all of which ooze with clever ideas and fine craftsmanship. For all the problems AAA games have had in the last year or so, with so many broken at launch, Nintendo deserves credit for consistently releasing games that WORK. The few games Nintendo releases each year are the only thing keeping the Wii U alive right now. And let us not forget the incredible breadth the Virtual Console spans.

This hypothetical console (which I would love if Nintendo named something that harkens back to its history, like NES Ultra) would not be the best-selling one by a long shot, but it would appeal to gamers who love Nintendo games while still offering them access to third-party titles. Nintendo’s online efforts would never compare to Xbox Live or PSN, admittedly, but there are enough gamers for whom that wouldn’t be a deal breaker, so long as they could at least play those multi-platform games without having to own multiple consoles.

With some measure of longing, I admit that the last Nintendo console I loved was the GameCube, even with all of its shortcomings and poor sales. I want to love another one — a piece of hardware that is more concerned with fostering amazing game development rather than trying to define itself with gimmicks.

1. The most prominent idea I’ve heard involves focusing all hardware and software efforts on a new handheld that can stream its content to TV.  This would be like an inverse Wii U, where the GamePad holds all the console’s intelligence and streams its content to a TV-attached receiver.  This would be clever, but I feel that handheld consoles are destined to lose to smartphones, especially as time spent playing mobile games continues to increase.  Moreover, the kind of technological miniaturization required to stand toe-to-toe with iOS or Android devices would be challenging to do in terms of engineering as well as in manufacturing scale.

Alternatively, the other idea I’ve heard is that Nintendo should abandon hardware development altogether and create games for PS4 and Xbox One.  This is an intriguing notion to be sure, and one that I would selfishly enjoy given my current investment in console hardware, but I really believe that Nintendo should take one more crack at this.  The home console space was dead and buried after the Video Game Crash of 1983, but the NES single-handedly resurrected it.  Seriously, long live Nintendo — if for nothing else.

Written by Michael

24 February 2015 at 11:51 pm

Posted in Games, Musings

Why I Don’t PC Game Anymore

I have a Mac.

I was very tempted to leave this article exactly as that one sentence, but that would have been disingenuous even if it were amusingly trollish.  That said, I haven’t concentrated on very many computer games since switching to Mac OS X, but I have dabbled here and there with such titles as EVE Online, StarCraft 2, Minecraft, and even Skyrim (via Windows in Boot Camp).  I even have a number of unplayed Steam games to get to one day.  However, for the purposes of this blog, I will transport my mind back to my Windows PCs in college, back when I used to game heavily on them.  I remember how the longer I owned a PC, the less valuable it felt to me.  Despite the progress that new programming and graphical techniques inevitably bring, games stop looking as good — not by comparison to other, newer gaming PCs anyway.  Contrast this with my PlayStation 4: I know that a PS4 manufactured five years from now will not outperform my launch unit.  It’ll be smaller and run cooler and be less expensive, but it will still output the very same images to someone else’s TV as my original does to mine.  But in the world of PCs, a five-year gap is like an epoch of computing.  It almost seems like you need to upgrade your video card every other year to keep games looking just like their launch trailers.  And that’s before we get into the mess of downloading the latest drivers, tweaking everything just so for maximum frames-per-second, and then still learning to be disappointed that such-and-such game was actually fine-tuned for an nVidia card, not an ATI, so we’ll never experience it exactly the way the developer wanted.

The opposite is true where my consoles are concerned: the longer I own them, the more valuable they seem to me.  The games look better and better as new ones release, they get more epic in both writing and gameplay, and the overall library of available titles grows so large I could never have any chance of completing them all.  Consoles improve with age by their very nature.

And while it’s true that contemporaneous PC games have better graphics, they need to be more powerful by necessity in order to draw quality images on a higher-resolution display than that of an HDTV, and they also need to hold up at two feet rather than eight.  When taking these distances into account, the differences aren’t so vast.  Moreover, I can’t shake the feeling that sitting at this desk, hands on this keyboard, feels like work to me.  Reclined on the couch with a controller, on the other hand, feels like leisure.  I know there are many out there that feel differently, who don’t mind and are actually enraptured by the idea of constantly tweaking and tinkering with their software and hardware as new games come out.  Or even if they’re not into those things, they don’t mind how each successive year of games requires that they move those graphics sliders further away from Maximum and closer and closer to Low.

Seriously, though, I appreciate that there’s no such thing in the world of consoles.  I also appreciate that abandoning PC gaming has led me to keep this Late 2009 iMac around longer than I usually keep computers, and that it still doesn’t feel slow to me because I haven’t thrown the latest graphics powerhouse at its now outdated and underpowered video card.  While PC gamers will often cite that they can build a better game system for the same amount of money that a console costs, it’s often a false economy once the upgrades are factored in.  To stay current is to commit oneself to countless upgrades — more RAM, a faster solid-state drive, a new video card with more pixel pipelines or whatever.  If you’re passionate enough about this, you’re even upgrading your cooling system so you can overclock safely.  It’s truly mind-boggling to me.

My disinterest with micromanaging my computer is probably the single most significant reason I switched to the Mac platform, but this realization about the value of PC gaming compared with console gaming is the reason I likely will remain there permanently, even if those Steam Summer Sales are pretty awesome.

Written by Michael

4 August 2014 at 12:08 am

Posted in Games, Musings

The Last of Us: One Night Live

I just got done watching The Last of Us: One Night Live, a theatre-style rendition of several pivotal scenes in The Last of Us featuring the original cast members, including Ashley Johnson (Ellie), Troy Baker (Joel), Annie Wersching (Tess), Merle Dandridge (Marlene), and Hana Hayes (Sarah). Interspersed throughout were live musical performances by the game’s composer, Gustavo Santaolalla, of some of the game’s most memorable pieces, which were haunting and inspirational all at once. Neil Druckmann, the scriptwriter, set the stage for each scene throughout the evening as well, directing what was a truly magical show.  I can think of no other game or game studio with the gravitas to pull this off so well.

Most importantly, this live show managed to remind me how incredible game development is in the creation of narrative, how many disparate elements come together to form a harmony. While there are many examples of great games that do all of these things very well, The Last of Us nevertheless manages to be much greater than the sum of its parts and find a way to not just be fantastic, like many other examples, but actually become a hallmark of the medium.

This title’s gameplay and graphics, while all excellent, are not the high-water mark we would judge other games against in this new generation. That said, this confluence of narration, formed of writing, acting, programming, and artistry, will long remain the standard by which we must judge game storytelling going forward. The Last of Us has taken upon its shoulders the whole medium and raised it to a height many of us didn’t know was possible. I believe this unexampled success will remain its greatest achievement long after its lofty sales numbers are forgotten and its gameplay and graphics are rendered outdated and quaint; the power of its storytelling is indelible.

It is my great honor to play it once more, remastered on PS4.

Written by Michael

28 July 2014 at 11:39 pm

Posted in Games, Musings

The All-Digital Future of Gaming

Many will protest before it’s all said and done, but it sure seems inevitable that digital will eventually replace all forms of physical media, whether we’re talking about music, video, books, or games, and in many cases it nearly has already.  I’m sure there are still a shockingly large number of CDs sold every year, but MP3 players (and now the phones that have this feature built in) have crushed physical media and relegated it to the margins.  Streaming services like Netflix have severely wounded disc-based industries like Blu-ray, despite offering an objectively lower-quality version of that media.  And like many, I haven’t purchased a physical book in years, thanks to my Kindle (and Kindle app).

But what fascinates me the most is the all-digital future of video games, which represents an especially hefty engineering challenge.  Games are increasingly large, on the order of dozens of gigabytes for the latest consoles, far outstripping other forms of entertainment.  Books?  Tiny.  Music?  Small.  Video?  Big, but heavily compressed.  Some wonder if the Internet structure itself could sustain such a future, especially if streaming gaming takes off, like with the OnLive service or Sony’s forthcoming PlayStation Now service.  (It’s not enough that Sony has to build a powerful enough backend for this service, complete with monster bandwidth – the way telecoms route traffic is critical to shaving off precious milliseconds of latency to make twitch-based gaming viable, especially when considering first-person shooters.)

While an all-digital ownership strategy is still in its nascent form, I have nevertheless embraced it for PlayStation Vita and PlayStation 4.  Despite owning 30+ games between the two consoles, not a one of them is printed to physical media.  There’s an inherent risk in this, I suppose, if Sony were to fold its gaming operations someday and I were no longer able to download said titles.  But I also think that such a development would portend an end to console gaming as we know it anyway, so I’ll stick my head in the sand on this risk because digital affords me some impressive conveniences, including the ability to have my entire game library with me whenever I travel with these devices.  No need to cram a duffle bag full of jewel cases with games I think I might want to play while away, risking damage and theft.  Moreover, I can log into someone else’s PlayStation 4, for example, and download any title I own.  Better still, a side effect of digital gaming is cloud saves, which make everything better.

Of course, this all depends heavily on having a good ISP, which seems like an oxymoron of a description based on today’s headlines.  I consistently get about 30 Mbps on my home Internet, so even a monster 38.5 GB title like Killzone: Shadow Fall will only take around three hours to download, and with pre-loading, a feature of the PS4, this can be done ahead of a game’s launch.  Further still, the PS4 also allows you to start playing games before they finish downloading by prioritizing the game’s essential frameworks and earliest levels first, saving end-game content for last.  I also appreciate that I can purchase a game from the Sony Entertainment Network Store (mouthful, no?) and command it to download to my PS4 or Vita remotely, both of which are always in standby mode.
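The back-of-the-envelope math behind that three-hour figure is simple enough to sketch; real-world throughput will vary with protocol overhead and congestion, so treat this as the optimistic floor:

```python
def download_hours(size_gb, speed_mbps):
    # GB -> gigabits (x8) -> megabits (x1000), divided by link speed,
    # then seconds -> hours. Ignores protocol overhead entirely.
    return size_gb * 8 * 1000 / speed_mbps / 3600

# Killzone: Shadow Fall (38.5 GB) on a 30 Mbps connection:
print(round(download_hours(38.5, 30), 1))  # roughly 2.9 hours
```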

Even so, digital is still imperfect.  We have yet to see a significant digital-discount movement, meaning gamers should pay less for a non-physical piece of merchandise: ideally, $5 to $10 off of the $60 MSRP.  The stores themselves leave a lot to be desired in terms of browsability.  And returns, as they stand today, are also quite dicey.

Allow me to describe a situation I recently went through with Sony Customer Service: I decided to purchase The Walking Dead: Season 2 Season Pass for the PS Vita, against my better judgment.  (I was aghast at the technical problems the first season had on Vita and am astonished that it made it through quality testing, honestly.)  But I wanted to preserve the choices I made in season 1 by transferring that save to the new season, so I stuck with this platform choice.  However, while making the purchase, I selected the PS3 version by mistake – chalk it up to fatigue, as it was late at night, or my better judgment trying to save me from a lag-fest.  In any case, I had to hop on with Sony’s over-the-web support people and explain the issue.

The two people I ended up chatting with were both very nice and clearly wanted to be helpful, but the first agent made it a point to explain that according to the PSN Terms & Conditions, all sales are final.  Nevertheless, he forwarded me to someone who did have the authority to issue a credit anyway.  The entire encounter took less than fifteen minutes, but the end result was that I would need to wait 7-10 business days for said credit.

Again, this was my fault, but a multi-day timeline is a little excessive in the year 2014.  Either way, I received an email less than a week later saying I had been credited the money, but that this was a one-time dispensation and that Sony Support would not do this again for future problems.

I feel like this is a decidedly draconian way to do this, requiring direct human intervention and restricting it to a one-and-done situation.  Here’s how I think an exchange or return process should work on a digital store:

  1. Make an incorrect purchase.
  2. Realize mistake, click Return button on game within one hour of purchase (longer requires human intervention).
  3. Be credited automatically (store credit) and have item removed from purchase history (or expired so title can’t be downloaded).

This should be especially easy if the logs show that the purchaser hasn’t downloaded the game in question to any device yet.  If the customer has indeed downloaded the game, then human intervention may again be required to ensure it was removed from said piece of hardware.
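The proposed policy boils down to a single eligibility check. Everything in this sketch — the record fields, the one-hour window, the status strings — is hypothetical, invented for illustration; it is not any real PSN API.

```python
from datetime import datetime, timedelta

RETURN_WINDOW = timedelta(hours=1)  # any longer requires human intervention

def process_return(purchase, now=None):
    """Auto-credit an eligible return, or flag it for human review."""
    now = now or datetime.now()
    within_window = now - purchase["purchased_at"] <= RETURN_WINDOW
    if within_window and not purchase["downloaded"]:
        purchase["status"] = "credited"  # automatic store credit
        purchase["expired"] = True       # title can no longer be downloaded
    else:
        # Downloaded titles (or late requests) need an agent to verify
        # the game was actually removed from the customer's hardware.
        purchase["status"] = "needs_review"
    return purchase
```

The download check matters: crediting automatically is only safe when the logs show the customer never pulled the bits down in the first place.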

A cleaner store interface would go a long way to preventing mess-ups like mine, but let’s be honest and admit to ourselves that they will still happen nevertheless.

Dependable infrastructure, long-term confidence in these companies’ futures, digital discounts, modern store UIs, and progressive return policies are the necessary ingredients for this all-digital future.  Even then, many will still want to cling to their pressed plastic discs, but just like music CDs, video discs, and books, physical games will come to occupy an ever-diminishing niche.  For the collectors or the paranoid among us, that’s great too.

But I couldn’t be more excited for this complex and awkward transition we’re just now entering.

Written by Michael

6 May 2014 at 11:07 pm

Posted in Games, Musings
