In the traditional news cycle, politicians, major organizations, and anyone with a PR brain dump stories into the week before Memorial Day (and sometimes the week after) in the hopes that no one will notice. Generally speaking, no one does notice because most of America is focused on the three-day weekend and the unofficial start of summer. Since this is a US-only holiday, the rest of the world goes on and wonders why the US drops out for a few weeks. Europe, for example, seems to take its summer vacation in August (all of August, in some countries), rather than piecemeal across a three-month period.
Which is why most of the interesting discussions in the past week have taken place in Europe or on the blogosphere. (Although, even on the blogosphere, it seems, Americans vanish. My readership is always down around Memorial Day, and it creeps back up slowly as the summer months progress. People look at more pages during that time too, apparently catching up on all that they missed.)
I’ll get to the European discussion in a moment. First, though, a story that got dumped into the silence that is the week before Memorial Day.
Remember last February everyone involved in traditional publishing pointed to the big flap between the distributor Independent Publishers Group (IPG) and Amazon? IPG pulled all of its titles from Amazon because of a contract negotiation in which Amazon demanded something IPG considered unreasonable.
When IPG pulled the plug on Amazon, it left countless authors who were published through small and regional presses without any ability to sell on Amazon. The blogosphere went nuts, partly on behalf of traditional publishing and partly on behalf of these writers, stuck in a situation outside their control.
Through it all, Amazon was portrayed as the Big Bad, asking for something terribly unreasonable, and the only choice IPG had was to pull books. I blogged then about IPG doing what any good business does in a tough negotiation. (If you follow the link, note how dated that post is: all of the flaps have ended and have mostly been forgotten.)
In that negotiation, IPG upped the ante. IPG said that if Amazon wanted to play hardball, IPG would play too. IPG didn’t cave; it continued to negotiate.
Three months passed. Occasionally, IPG’s CEO would blog about how difficult the negotiations were, but those negotiations never ended. And, magically, last week, the two companies came together, signed a new contract, and voila! IPG’s books are for sale on Amazon again. Just like I told you they would be. Everybody screamed about Amazon, when Amazon was doing what businesses do—negotiating terms. IPG never quit negotiating either.
IPG’s president refused to discuss the terms with the media, and told his clients (who, I assume, are the small publishers, not the writers), “I feel that the experience has clarified some things for us and our clients, and that now we are all even better equipped to navigate through this rapidly changing industry.”
The story was dumped onto Friday afternoon so that no one in the industry would notice. Writers will discover their titles are back up during the next few weeks, and everyone will forget that IPG was part of the big Amazon-Is-Evil flap.
The point of this week’s blog isn’t the IPG flap, but the way that the media can be manipulated. It happened last week with The Taleist’s survey of indie-published writers.
On Thursday, The Guardian in the UK published an article with this headline: Stop the press: half of self-published authors earn less than $500. Initially, I blamed The Guardian for that headline, not because the information was incorrect or even because it’s inflammatory. (Which it isn’t, given what the Taleist wrote—but I’ll get to that.)
It was because I had taken that survey. And while I knew that the questions were incomplete (and somewhat biased), I also knew that the survey writers were trying to be comprehensive. I think it took me 15 minutes to fill out the damn thing.
I figured the Guardian’s reporter, Alison Flood, didn’t drill deeply into the numbers. Maybe she did, maybe she didn’t. But I must say that the bias in the article isn’t hers. If anything, she toned down the results of the survey.
Let me explain something, ancient recovering reporter that I am. When Reporter A (who used to be me) is working under deadline, she doesn’t have time to read 60+ pages of a newly published survey. She must rely on the conclusions published with the survey. She will dig into the numbers that fascinate her, or might seem newsworthy, but she doesn’t read the entire survey.
For the past two days, I’ve been trying to read the entire survey. I bought the e-book (yes, they released it as an e-book) so you don’t have to. I’m not going to link to this because of the commentary and the bias. If you want to spend your own money reading the numbers, go ahead. But I’m not going to encourage it.
Not because I’m mad at the survey. The guys at the Taleist did something that needs to be done. Someone needs to survey writers who are publishing outside of traditional publishing, and get some hard facts and figures.
Unfortunately, the Taleist’s survey is not that survey.
First, problems with methodology, which the Taleist guys freely admit.
1. The survey is self-selecting. Any time you get a self-selecting survey you immediately run into the problem of bias. Bias can cut two ways. You’ll get the people who love, love, love whatever it is, and you’ll get the people who tried it, hated it, and want to tell everyone about whatever it is. The Taleist guys know that’s a problem, and they tried to compensate for it. Unfortunately, they compensated through their bias, and that caused additional problems, which I will get to.
2. The survey is anonymous. Well, hell, I could have participated five times from five different computers if I wanted to. You know the old Chicago joke: Vote Early and Vote Often. There’s no accountability in an anonymous survey. The survey writers don’t even know who participated. And they didn’t try to verify any information they got.
This is not unusual in such surveys, by the way. And yet those of us who love numbers will often take this stuff as gospel. It’s not. Because I could, in my five different responses, give five different numbers, none of them accurate, and no one would know.
3. The sample size is too small. The sample size is 1,007 people. The Taleist guys tried. They really did. I saw notifications about this all over the web, and I came over to take the survey. I did not, however, tell a listserve I’m on with dozens of indie writers (all of whom are making money), nor did I tell Dean or anyone else about it. In other words, by myself with some prodding, I could have added 10% more respondents to the list without even trying.
1,007 is a ridiculously small percentage of indie writers. Let me show you why. Mark Coker, on his year-end blog on Smashwords, a distribution service that many indie writers do not use (preferring to publish only on Kindle), says that 38,000 writers and small publishers around the globe used his service in 2011. That was up from 12,000 in 2010. I’m sure more use it now.
Look at those numbers: 38,000 writers and small publishers who actually know about Smashwords. Most indie writers use Kindle Select or other exclusive services, which means that they do not use Smashwords because of the exclusivity. I’m not even going to hazard a guess as to how many writers and small publishers there are publishing outside of traditional publishing right now because I will be wrong. But 1,007 is a terrible sample size. [Later: see the comments for notes from statistics folks on sample size.]
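For what it’s worth (and as that bracketed note hints), the statistics folks make a narrower point: for a genuinely random sample, 1,007 respondents yields a respectable margin of error. The invalidating flaw is the self-selection, which no sample size can cure. A back-of-the-envelope sketch, using the standard worst-case formula for a proportion:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error (95% confidence) for a simple
    random sample of size n estimating a proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

# 1,007 respondents, if (and only if) they were randomly drawn:
print(f"{margin_of_error(1007):.1%}")  # about 3.1%
```

That roughly three-point margin holds only under random sampling; a self-selected pool can be off by any amount, in any direction, which is the real objection here.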
4. The questions were bad. Many, many times as I took this survey, I had to choose between set answers that did not apply to me at all. I had to pick the least offensive one.
Questions always show a survey writer’s bias. And ironically, the Taleist guys had some serious pro-traditional-publishing biases from the start. I say ironically because they indie-published the results, and if they sell about 150 copies of their survey (which I’m not helping with), they will easily make that $500 that they’re talking about as the median income for writers.
(And, speaking of bias, these guys titled the survey results Not a Gold Rush. Apparently, they thought indie-publishing was a gold rush. Why else would they have chosen that title considering all the other things this survey asked about?)
The Taleist guys have no idea what real writers are like. How much they write, how much they publish (traditionally), and how long their careers last. These guys are shocked that most of the respondents were women of a certain age, who had been in the business a long time. (Maybe longer than the survey writers had been alive.) The survey writers postulated it was because a lot of romance writers responded.
Nope, guys, it’s because your survey attracted real writers. The folks who make a living at it.
Here’s a sign of bias: When the survey writers asked what people did for a living, they did not allow the respondent to answer that they were self-employed. I remember that.
They write, “We focused on hours of employment and did not offer an option for self-employment. We both know people for whom self-employment might mean 100 or more hours of work per week and for others it might mean writing website copy.”
Okay, fine. But taking self-employed off the list of choices was just silly. Because they still could have looked at hours worked and measured it, then pointed it out in the responses. (Half of the self-employed people only spent 100 hours at their jobs—and oh, look, they were the ones who earned less than $500 that year. D’oh.)
I remember staring at that for a while, wondering what the hell these guys meant. I couldn’t answer the employment question honestly, and then the survey asked me how much I worked at whatever it was I did? Confusing. It pissed me off. I’ve been self-employed my entire life. I would have quit responding right there, but I had already invested too much time in it, so I finished.
I had these kinds of problems over and over and over again with this survey. The questions were awful.
5. The survey writers used their bias to tally the results. From the overview: “We believe there are also mistakes in the answers. Question 17 asked, ‘How many books have you self-published by year?’ One respondent wrote ‘16,000’ which was enough to set the average number of books self-published by author in 2011 as 112 with the help of a few similar responses. While we speculate later in this report that authors have responded to the new self-publishing channels by pulling manuscripts from the bottom drawer to publish, it would be a hell of a career (or bottom drawer) that held 16,000 manuscripts…Even publishing 112 manuscripts would require publishing more than two books a week.”
Clearly that 16,000 was a mistake. But they’re doubting 112. Dean and I have published more than 270 e-books since 2010, and we still have a lot of backlist to go. For those of you who are math-challenged, that’s 135 books per year. (Although it didn’t work out that way. It was closer to 100 in year one and 170 in year two.)
Here’s the kicker:
“We assume that the respondents misunderstood the question as ‘How many books have you sold by year?’ Where we found individual responses that significantly affected the results and that we did not believe to be accurate, we have corrected the data by filtering out these responses.” [Emphasis mine]
In other words, they probably believed that our 270 manuscripts were impossible to do in a year, so they filtered it out. And God knows what else they filtered out.
So not only is the data gathered suspect, but the report is as well. They tampered with the evidence they gathered.
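How much a single wild response drags on a mean, and how much silent filtering changes the story, is easy to see. A quick sketch with invented numbers (these are for illustration only, not figures from the survey):

```python
from statistics import mean, median

# Hypothetical responses to "how many books have you self-published?"
responses = [1, 2, 2, 3, 4, 5, 7, 12, 270, 16000]

print(mean(responses))    # one outlier drags the mean to roughly 1,630
print(median(responses))  # the median barely notices: 4.5

# Filtering "implausible" answers by fiat changes the result entirely,
# and quietly drops legitimate high producers (the 270) along the way.
filtered = [r for r in responses if r < 250]
print(mean(filtered))
```

A more defensible fix than ad hoc filtering is to report the median alongside the mean: the median is robust to outliers without anyone having to decide which answers to disbelieve.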
Bad questions, bad methodology, bad analysis.
If The Guardian reporter and all of the others who used this survey as, in the words of The Guardian, “one of the most comprehensive insights into the growing market to date,” had actually read the report, they would have realized this survey is invalid.
Which is a shame. Because initially, when I planned to write this blog, I was going to discuss the hopeful stuff you can find in the survey once you strip away the bias. I mean, if the information were accurate (and it’s clearly not), average earnings of $10,000 for a self-published author, with a median of $500, would be incredibly good.
Because if, ten years ago, you had done the same kind of survey of writers who had published at least one thing over three years, the average earnings might be higher (depending on who responded) but the median would be lower. Most writers who published one or more things in three years earned nothing in two of those years. Nothing.
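A mean of $10,000 next to a median of $500 is the classic signature of a skewed distribution: a handful of big earners pulling the average far above the typical respondent. A toy distribution (numbers invented to match those two figures, not drawn from the survey) shows the shape:

```python
from statistics import mean, median

# Invented earnings: most respondents near zero, a few doing very well.
earnings = [0, 0, 100, 300, 400, 600, 1500, 5000, 40000, 52100]

print(mean(earnings))    # 10000: the "average author earns $10k" headline
print(median(earnings))  # 500: what the typical respondent actually saw
```

Two earners out of ten account for over 90% of the money here, yet the headline "average" describes none of them.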
As for one of the other conclusions, that the traditional publishing gatekeepers clearly work because the writers who are doing the best were traditionally published: hogwash.
What that means is that we traditionally published writers have a built-in backlist and at least 1,000 true fans—the folks who buy our work repeatedly. It takes more than three years to build up that kind of audience, and it takes publishing more than one thing.
It has nothing to do with gatekeeping. It has to do with readership. Ask the same question of indie writers five years from now (in a legitimate survey) and watch that number go up, as indie writers build an audience by publishing a lot.
We need a good comprehensive survey of writers who are stepping outside of traditional publishing. Mark Coker can’t crunch his numbers and reveal this information because Smashwords doesn’t have access to the numbers for writers who go direct to Amazon or Barnes & Noble or the iBookstore.
But his survey would be a hell of a lot more accurate than this Taleist one is.
I would love to know how well indie writers and small publishers are doing. Unfortunately, it’ll take years to find out. The normal ways of learning this information are no longer available. In the past, writers organizations used to survey their members, but most writers organizations do not accept indie published writers even if their work sells tens of thousands of copies per month. Surveying writers with careers longer than five years doesn’t work because the indie publishing revolution isn’t that old yet.
So we’re going to have to wait. But when someone quotes this survey and makes it sound like indie writers aren’t doing well, tell that someone about the methodology. This survey isn’t valid, no matter what the press says.
This blog wouldn’t exist without new ways to crowdsource a project. I write every week because you guys return and because you donate to keep me writing nonfiction. I make my living off my fiction writing, and have for decades, so writing nonfiction is always a risk.
You guys have kept the blog alive for more than three years with your comments, e-mails, links, and donations. So if you found something useful in this post, please leave a tip on the way out. That’ll guarantee there will be more posts in the future.
Thanks so much!
“The Business Rusch: Not A Real Survey,” copyright © 2012 by Kristine Kathryn Rusch.