In the dark of night they worked, cobbling together performance data, considering delegate importance and personalities, and calling upon their extensive experience as light entertainment producers to create something magical. And in the end they produced this:
So that’s it. Signed, sealed, delivered, short of a late withdrawal or a disqualification. We have our performance order. And it’s all wrong.
No, not wrong in terms of how it balances different genres, or accommodates operational requirements (e.g. some entries require a bit more setup time). It's wrong because, once again (beginning in Malmö in 2013), the performance sequence is being driven by a handful of people, some of whom have an interest in the show's outcome.
If we still had a wholly random draw, no one could subsequently argue that some delegations are being supported at the expense of others. Granted, the host’s slot is always done as a random draw (slot 9 for Frans and If I Were Sorry, if you were wondering).
The performance order is what it is. The questions to be answered are:
- What can we glean from the sequencing, in terms of possible semi-final performance?
- Which entries might have been written off?
- Which delegations should be pleased?
We've taken a look at the top 3 qualifiers from each semi-final (that's 2 × 3 per year, since there are two semi-finals every year) to see where they were placed in the Grand Final running order. Each delegation draws randomly for the first or second half, but within each half the producers assign entries their slots.
We took the top 3 semi-finalists from the 2013, 2014 and 2015 Contests: the producer-led performance order started in 2013. The winner of each year's Grand Final is in bold. We have omitted all the slots that have never been assigned to these top-tier entries.
The producers have used 14 of 27 slots available across these three years, which means the top three qualifiers have been surprisingly spread out across the performance sequence. Of these entries, all but one—Romania’s 2014 comeback entry, Miracle by Paula Seling and Ovi—finished in the top 10.
What is striking is that the producers have doubled up four times, at slots 10, 18, 21 and 24. That implies these are slots for songs that are contenders to win. And 3 of the 8 entries given these slots (37.5 per cent) ended up in the top 3. With around 40 songs competing each year, that's a good strike rate.
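The tally described above can be sketched in a few lines of Python. The four doubled-up slots (10, 18, 21 and 24) and the headline counts (14 of 27 slots used, 3 of 8 doubled-slot entries in the top 3) come from our table; the ten single-use slot numbers below are placeholders purely to make the example run, not the actual assignments.

```python
from collections import Counter

# Slots drawn by the 18 top-3 semi-final qualifiers, 2013-2015
# (6 entries per year x 3 years). The first 8 values reflect the
# doubled-up slots identified in the table; the remaining 10
# single-use slots are illustrative placeholders.
slots = [10, 10, 18, 18, 21, 21, 24, 24,
         2, 5, 7, 13, 15, 17, 20, 22, 25, 26]

counts = Counter(slots)
doubled = sorted(s for s, n in counts.items() if n > 1)
print(f"{len(counts)} of 27 slots used; doubled up at {doubled}")
# -> 14 of 27 slots used; doubled up at [10, 18, 21, 24]

# Of the 8 entries in doubled-up slots, 3 finished in the overall top 3:
top3_hits = 3
print(f"{top3_hits}/8 = {top3_hits / 8:.1%} of doubled-slot entries made the top 3")
# -> 3/8 = 37.5% of doubled-slot entries made the top 3
```

Nothing sophisticated, but it makes the reuse pattern explicit: any slot appearing more than once in three years of producer-chosen sequencing is worth watching.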
This is why the qualifiers' positions in the Grand Final performance sequence are important, and perhaps predictive. They are the only entries for which the producers have any substantive data about popularity; in fact, they have data about their relative popularity with both jurors and televoters. However, these data cover only about half the televoting and jury populations in any given year: half of "Europe" votes in each of the two semi-finals (Australia in 2015 being an exception, having voted in both semi-finals and the Grand Final).
There is one significant limitation here: the producers have little or no data about the pre-qualified countries. They could, theoretically, look at YouTube view counts, chart performance, download sales or streaming plays, but these are often skewed towards the artist's home country and rarely generalisable across the viewing area.
In terms of challenging for the title, we might assume that the early performers (Belgium, Czechia and the Netherlands) were perhaps in places 6-10 of their semi-finals' qualification rankings. As might Israel and Bulgaria, who are performing 7th and 8th. Slot 9 would also be read this way, but for 2016 it was randomly assigned to hosts Sweden.
Similarly, we find Cyprus, Serbia, Lithuania and Croatia in our next gap sequence, 14th through 17th. The rest of the sequence has several stand-alone "never used" slots: 12, 23 and 26 (slot 27 was a one-off last year). But if I were Poland, Georgia or Armenia, I would not read too much into my assignment: these entries are surrounded by songs that might have finished top five in their semis. Whereas Big 5 member the UK might be the "filler" in the latter part of the show.
So who should be pleased? In the ostensibly highest profile slots we find:
Germany is a bit of a surprise. Ghost was a relatively minor hit in the German domestic market, and made a tiny ripple in the Austrian and Swiss charts. Meanwhile Austria has been given a prime slot—could Zoë have finished in the top three of her semi-final?
The ones that should be most pleased are Russia and Ukraine. These were considered desirable slots back when random draws were used to assign performance positions. Granted, there might be a producer-driven decision to push the "Russia versus Ukraine" narrative through the sequence.
Three of these entries are starting to pick up on the iTunes charts across Europe. Germany is the exception, but Jamie-Lee also hasn't yet performed Ghost in its full version for the viewing public on the Stockholm stage.
Next we have entries in those slots that have sometimes done well:
The positioning of these could be interpreted several ways. First, none of them were wholly buried in the low-hope slots discussed above. Australia, Latvia and Malta might have been in the top half of the semi-final qualifiers. They also might have had skewed scores: high overall, but dependent more on either the jurors or the public, which will quite probably flatten out when 26 mostly very strong entries are in the mix.
For France and Spain, we are inclined to interpret their placements as a belief by the producers that these entries are contenders for a good (top 10) result. Which leaves a question…
This year all of the pre-qualified entries performed in full during the jury rehearsal of one semi-final or the other. This gave the artists a chance to build some momentum ahead of the Grand Final: after all, only two winners since 2004 had been pre-qualified for that year's Grand Final (Greece 2005 and Germany 2010). It also gave ticketholders for those shows an added-value experience.
But think about it for a moment. You've set up a whole show for the jurors to watch and vote on. You've asked three pre-qualified entries to perform as if it mattered. Might the producers have asked the juries to do a sort of formative assessment of these entries, with the results shared (rather than formally submitted)?
There are a few reasons to do this. First, it would give the producers some actual data about how the versions staged for Stockholm might do on Saturday night. That helps with the performance sequence and the voting sequence. Second, it requires the jurors to think about these entries substantively. That means these entries would be on a more even footing on Saturday night: like the qualified entries, they would have been vetted by half the jurors of the viewing area.
If the EBU didn't do this, perhaps it should for 2017?