Random thoughts: The problem with product reviews

Reviews, huh? What are they good for? Before you finish that refrain, the answer is more complex than you might expect, and the issues around reviews are more complex still.

Random thoughts is, as the name implies, random. Also, I’m thinking out loud on the page, so this could be structurally messy.

Reviews: The access dilemma

Here’s a sneak peek behind the scenes of the average product review. A manufacturer has a great new widget, and it wants to sell millions of them, because business. It could advertise, but that costs money, and significant sums of it for publications with a wide audience.
It’s a little cheaper, as you might imagine, for smaller publications, but then you’re “buying” fewer eyeballs, and with the rise of Internet-based publishing you run into that whole sticky issue of people using adblockers because they’re tired of endless ads in the first place. There’s a whole new market of what’s called “native” advertising, where advertising pretends to be content (and, I’ll be totally honest here, I’ve written some native advertising pieces as a freelancer myself, because bills need paying and all that), but like conventional advertising, native advertising involves upfront cost.
A review, on the other hand, can be incredibly cost effective. Provide a product or service for a period, get it written up in a way that people want to read, and (if your product’s as good as you think it is) it’s arguably better than advertising, because it’s a third party advocating for it, warts and all. The warts are important, though, and I’ll get back to that in a minute.
To answer the most frequent question I get asked: no, typically I don’t keep review products. They mostly go back. I’m fine with that.
Anyway, that’s the way that 99.99% of the tech product reviews you’ll ever read work. There are some niche exceptions — CHOICE in Australia being an obvious one, because they buy their own gear, and I’ve nothing against CHOICE (Disclaimer: I’ve written for them too) — but that’s mostly how it goes.
What’s interesting, of course, is how it’s stage managed. There are plenty of “softer” targets for reviews who pretty clearly get expedited access, and I can see from a vendor’s point of view why you’d do that, because again, it’s low risk with a sympathetic, generally non-analytical target.
There are also the vendors who simply never go in for review products. I hit one earlier this week: a notable Australian tech industry success story who’s quite happy to talk up how wonderful the Internet is for information (it’s been very good to him and his low-cost tech products) but who won’t seemingly offer up review products for critical examination.
Says a lot about the faith in the product’s quality, I think.

Reviews: The scoring dilemma

I don’t score reviews here at Fat Duck Tech. That’s totally a personal call; I do get how people like to use them as a shorthand to assess a product quickly, but I’d much rather let the words do the talking for me.
Still, every once in a while, shitstorms erupt around scoring. Last week it was the review-in-progress of Uncharted 4 that IGN’s Lucy O’Brien wrote up. Nice little piece (you can read it here), but one that made certain corners of the Internet go frothing-at-the-mouth insane.
Why?
Because she’d given it a review score of 8.8, noting that it’s a review in progress due to the lack of multiplayer testing. Presumably if multiplayer is great that score might rise, and if it’s woeful it could drop.
8.8, the online gamergate-centric idiots frothed, was an outrage for a AAA-rated game, and this was clearly a feminist conspiracy against Sony, with Lucy incapable of opening a game box, let alone properly scoring such a game… or some such idiocy. You know, that whole women-unable-to-do-things-because-lady-parts-get-in-the-way nonsense that somehow is still going on and on.
(If all this talk confuses you, buy this book. Trust me.)
The single best satire of the whole messy business was, predictably, found at Point & Clickbait. Go ahead. Read it. It would be laughable, if it didn’t involve moronic online harassment.
Yeah, I’m with Lucy on this one, for two reasons.
On the minor side, 8.8 is in no way a “bad” score. If a game interested me, and it got an 8.8, I’d be highly likely to buy that game. Hell, while I don’t score things here, I do (and have) score reviews for other publications as required, because that’s what you do as a freelancer.
I once scored Backyard Wrestling: Don’t Try This At Home a 1, because it was (and is) hopelessly inept. That got me a very cranky phone call from the Atari PR manager at the time, but I stood by that review, and that review score.
A score can (and should) sit anywhere within the entire range of your review scoring system, and while it’s no fun to review a 1/10 title, if that’s what it’s worth, that’s what it should get: that’s a bad title. An 8.8 is perfectly decent all on its own, let alone for a work-in-progress review.
(Disclaimer: I used to work at Ziff Davis Australia, where Lucy works, back when I was editor of PC Mag Australia. At this stage, phantom hackers notwithstanding, I’ve pretty much worked everywhere in Australian tech media, and in this case I was working in a nearly-always-out-of-the-office capacity. I think I’ve said hello to Lucy in person maybe twice. Make conspiracies of that, if you must.)

Reviews: Objective or Subjective?

Have a random hedgehog picture, just because I can't think of a good way to illustrate this bit. Sorry about that.
One of the most common refrains I hear from people complaining about reviews is this notion that a review should be 100 percent objective.
It’s utter rubbish, of course. Reviews are subjective.
They are, and they should be. A totally neutral review may as well be a blurb from the back of the box with some meaningless numbers next to it, of little or no use to anybody. Yes, some folks online do write that kind of thing, and it’s a tedious trudge through boredomville. Bad reviews, in other words.
A good review, by contrast, will naturally take into account the reviewer’s own subjective viewpoints on any number of criteria. Those could be matters of personal outlook, but equally they could relate to the cultural standpoint of the publication, its marketplace, its readership, just about anything. All valid, and, if the review is any good, usually fairly transparent to the end reader.
After all, people attach to movie critics that they like, and not to ones that they don’t. I generally tend to prefer, say, Marc Fennell’s movie reviews to those of David Stratton, not because David’s a bad reviewer (I suspect he’s seen just a few more films than Marc, although he’s had a few more years to do so), but just because his viewpoints don’t always coincide with mine. That doesn’t invalidate the esteemed Mr Stratton’s review in any way. It just means that I’m personally less likely to read it, or if I do, more often likely to disagree with it.
The target market of the publication also plays its part. To throw it back to tech, I’ve written in the past for the now-defunct Atomic, and a review for them would be different to something I might write for the Sydney Morning Herald, because Atomic’s audience was much more hardcore tech, whereas the SMH tends towards the mainstream consumer. Write for an evangelical magazine and you’re unlikely to make jokes about God; write for the IPA and you’re unlikely to effuse about how wonderful unions are. I could go on, but I think you get the point.
It’s perfectly fine to disagree with a review, by the way. Feedback, of the constructive variety, is no bad thing, and no reviewer is immune to goofs, whether they’re small scale (a figure in a table being a typo) or larger scale (a missed feature or a misunderstood idea). Write your own criticism of a review if you really must.
But you know what you’d be doing at that point? Applying your own subjective criteria to a review.
Or in other words, gotcha.
