[mediawatch] KBRM

New kid on the mediawatching block is the neutrally-titled Kiwis for Balanced Reporting on the Mideast. KBRM’s website says it started during the Israel-Lebanon war in response to what it saw as heavily biased reporting. It now exists to monitor New Zealand news media (in particular daily newspapers), take them to task when necessary, and promote ‘the other side of the story’.
Long-time readers of this blog will know that my sympathies for the state of Israel are limited. Some of you won’t share this view. I’m going to try and approach KBRM without relying on assumptions you might not share; you should know where I’m coming from in case I don’t succeed.


Front-Page Rhetoric
There are two things I wish to address. First is a chunk of rhetoric on the front page of the website, talking about the reportage of the Israel-Lebanon war:

While anti-Israel bias may have been in existence earlier, the problem was exacerbated during the war to the point where Israel was portrayed as cruelly causing devastation to innocent neighbors, rather than as a country which wants peace with its neighbors but is forced to fight for its life.

This text points out two ways of portraying Israel: one which is wrong and biased (cruelly causing devastation to innocents) and another that is truthful (wanting peace but forced to fight for its life).
The second view, the “truthful” view in this construction, is clearly mythologising. The language of being “forced to fight for its life” has no place in a discussion of inter-state relations. Furthermore, it isn’t possible to claim with a straight face that Israel is innocent with no case to answer; the volume of international consternation isn’t all the result of biased media reporting and anti-semitic conspiracy. KBRM is promoting a mythology.
Of course this might just be bad copywriting, and not reflective of the complexity of their actual activities. Clearly it’s a signal for concern, but of itself it is hardly reason to ignore the group. Sometimes organisations can be far more complex and reasonable than their written charter may suggest. (Indeed, buried in correspondence on the site is a more reasonable position: Of course Israel is not beyond criticism, but we believe that criticism should be “balanced and proportionate”.)


Scoring and Balance
The second, and more important, thing I wish to address is the ‘scoring’ process for evaluating balance in newspaper reporting. This is discussed here. Note that significant emphasis is placed on the objectivity of the method:

Articles and cartoons that deal with the Arab-Israeli conflict are rated for balance by a simple objective criterion. If more space is given to description of damage or hardship suffered by Arabs and to statements or quotes that blame Israel, the rating is P. If the reverse, the rating is I. If the space is the same, position and emphasis (including headlines and photos) become the determinant. If there is no imbalance in space or emphasis, the rating is 0. Thus a rating of P or I can indicate anything from small imbalance within an article to a full-page propaganda piece.
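As I read it, the stated criterion reduces to a simple decision procedure. Here is a minimal sketch of it; all function and parameter names are my own invention, since KBRM of course publishes prose, not code:

```python
# A sketch of KBRM's stated scoring rule, as I read it.
# All names here are my own; "space" stands for column inches or
# paragraph counts, "emphasis" for position, headlines and photos.

def rate_article(space_blaming_israel, space_blaming_arabs,
                 emphasis_against_israel=0, emphasis_against_arabs=0):
    """Return 'P', 'I', or '0' per the stated criterion."""
    if space_blaming_israel > space_blaming_arabs:
        return "P"  # more space to Arab suffering / blame on Israel
    if space_blaming_arabs > space_blaming_israel:
        return "I"  # the reverse
    # Equal space: position and emphasis become the determinant.
    if emphasis_against_israel > emphasis_against_arabs:
        return "P"
    if emphasis_against_arabs > emphasis_against_israel:
        return "I"
    return "0"  # no imbalance in space or emphasis
```

Note that, exactly as the quoted text says, a “P” from this procedure can mean anything from a one-paragraph imbalance to a full-page propaganda piece; the rating carries no magnitude.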

This methodology fills me with trepidation, and not because of its claims to objectivity. For amateur media analysis it is a relatively objective approach; counting paragraphs that mention suffering or blame isn’t too bad, although it obviously isn’t up to academic standards. Where it runs into trouble is its construct validity: does this counting-and-comparing technique actually tell us anything about the balance of the articles, as it claims to?
Well, yes and no. Yes, in the sense that it tells us whether an article has more A than B in it, or vice versa. In a certain sense, that could be read as “balance”. But take the methodology one step further and it all falls apart. The scoring methodology is founded on the notion that “more A than B” tells us something useful about balanced media coverage, but it doesn’t tell us anything of the sort.
Descriptive Information Gets Us Nowhere
The data gives us information on the frequency of A and B. Completely missing from the picture is any indication of the correct frequency for A and B. In other words, we are asked to draw conclusions about balanced coverage without reference to what is actually going on.
The data is, in fact, entirely useless for its stated purpose of telling us about balance.
Consider a situation where A commits 75 horrific acts against B, B commits 25 horrific acts against A, and each act is reported once. If this scoring process were used on the newspaper doing the reporting, it would produce a dataset in which 75% of articles condemn A and 25% of articles condemn B. Does that mean there is a bias against A?
Reverse it. If you have a dataset in which 75% of articles condemn A, and only 25% condemn B, what conclusion are you meant to draw? That the coverage is accurate and A caused three times as much misery as B? That the coverage is biased against A because the articles should show a 50/50 split? That the coverage is biased against A because B is in fact the source of most of the misery? In fact, there is no conclusion you can draw. All you have is a stack of coded data that doesn’t turn into information, because it doesn’t tell you anything.
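The point can be made concrete with a toy calculation (my own construction, nothing to do with KBRM’s materials): two entirely different worlds, one with unequal wrongdoing and accurate reporting, one with equal wrongdoing and biased reporting, yield exactly the same tally.

```python
# A toy illustration: the same 75/25 scoresheet arises from two
# different ground truths, so the scoresheet alone distinguishes neither.
# All names and numbers are my own hypothetical construction.

def tally(acts_by_a, acts_by_b, articles_per_act_a=1.0, articles_per_act_b=1.0):
    """Share of articles condemning each side."""
    articles_a = acts_by_a * articles_per_act_a
    articles_b = acts_by_b * articles_per_act_b
    total = articles_a + articles_b
    return articles_a / total, articles_b / total

# World 1: A really commits three times as many acts; reporting is accurate.
world1 = tally(acts_by_a=75, acts_by_b=25)

# World 2: responsibility is equal, but the paper over-reports A's acts 3:1.
world2 = tally(acts_by_a=50, acts_by_b=50,
               articles_per_act_a=1.5, articles_per_act_b=0.5)

assert world1 == world2 == (0.75, 0.25)
```

The coded data is identical in both worlds; only external knowledge of which world you are in lets you call the coverage balanced or biased.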
The only way this data is useful is if it is contextualised by external knowledge. If you know that Israel and Palestine are roughly equal in responsibility for the conflict and its attendant suffering, then you can draw useful conclusions about media balance from this data. Or, say, if you know that Palestine bears most of the responsibility while Israel is fighting for its life, then you can draw useful conclusions. You need to know something outside in order to make sense of the data. The only problem is, as soon as you draw in external knowledge, you’re back in the realm of the subjective. How do you know who bears the most responsibility?
So this scoring method doesn’t in fact measure “balanced reporting”; it is purely descriptive, describing what was reported but not giving any information about whether this was balanced reporting or not. The only way you can get any knowledge about balance is by applying your prior assumptions about what the balance should be to the overall distribution of results.
The scoresheet is worthless on its own terms as an indicator of balanced reporting. The KBRM are, apparently, blind to this. There is nothing on the website to tell us how they expect us to interpret their results. I can only presume they think it is self-evident; they explicitly state the purpose of KBRM is to oppose a prevalent anti-Israel bias in reporting, so presumably they see the results as evidence of anti-Israel bias. In fact, they aren’t evidence of anything much at all.
What Is Balance, Anyway?
The “scoresheet” approach presumes that balance can be addressed by tallying how many bad stories are told about each side. We’ve already seen that these tallies are useless without external knowledge; they don’t tell us anything directly about the idea of balance.
However, these tallies fail in another way as well – they ignore other aspects of balance.
I talk here about just one such aspect, of particular relevance to the Israel/Palestine situation. There are other aspects which are also left unconsidered by the scoring data and by the KBRM.
Balance also means that all material necessary to understand the facts is being fairly presented.
The Israel/Palestine conflict is a profoundly unequal one; Israel is very strong, and Palestine is very weak. This is crucial contextual information in order to understand the nature of the conflict. Balanced coverage of the conflict would inform readers of the power differential between the two parties. This is almost never done.
For example, the state of Israel makes a careful point of obscuring the power differential. It always insists that peace can only be achieved if both parties make sacrifices, emphasising an implied equality. (And, in fact, it always insists that Palestine must make the first sacrifice, in spite of its vastly inferior position in the power balance.) The uncritical reporting of this act of spin unbalances media coverage in favour of Israel.
This imbalance is present in the KBRM material. The persistent and crucial failure to convey the power differential between Israel and Palestine is unbalanced reportage, and yet it is completely ignored by the scoring system used by KBRM. In fact, the KBRM and its scoring methodology imply that Israel and Palestine are at least equal in power, and present balance in reporting as entirely a matter of “evening out A and B”. This implication is misleading.


Sides of the story
There is a trend in both Palestinian and Israeli camps to criticise media coverage. This discussion indicates part of why – different understandings of what constitutes balanced coverage.
Both sides can feel they are victims of unbalanced coverage; the Israel side, that its bad acts are mentioned much more prominently than the bad acts of the Palestinians; and the Palestinian side, that its true situation is not reported at all. Both concerns can be true at the same time.
The failures of the KBRM are many and serious. The website and scoring methodology reveal a lack of understanding of what constitutes balanced coverage; the clear bias on display further torpedoes their work. The KBRM’s commentary only makes sense within a particular worldview; as a general outlet for mediawatching, the KBRM is simply not a credible source.


For the interested, more thoughts about Israel/Palestine can be found here, in an account of a trip to Israel and Palestine made by Cal and me three years ago. There are photos.