After collecting the survey data described in Chapter 6, and completing my doctoral thesis, I considered never publishing my statistical analyses of public innovation in ocean science instruments. The results sat in a metaphorical ‘file drawer’ for years after my PhD defence. But this was not the well-known ‘file drawer problem’ where science is skewed by the suppression of statistically non-significant results (Rosenthal, 1979). Instead, I sat on these statistics because I had come to hate them. I had enjoyed producing them; the work was challenging and stimulating, not routine or ‘banal’ (Lippert and Verran, 2018). And I was passionate about the point these numbers make – serving as ‘evidence’ of public innovation in goods and ‘proof’ of poor public policy in the place where I live. However, statistical conventions would prove antagonistic towards this passion and politics.
It had seemed that statistical evidence was needed to shift public policy. One of my system experts had told me that the recent government cuts to ocean science, especially the cuts at BIO, would be devastating for Nova Scotia. This expert said the most important contribution I might make would be to show this in numbers. And so, I embarked on the fool’s errand of trying to debunk neoliberal dogma with statistics. Following convention, I wrapped the statistics in the trappings of rationality and objectivity. I found that the numbers could only thrive if they appeared apolitical. But I also found that depoliticizing the statistics made them trivial.
Like Helen Verran, in the first iteration of her book Science and an African Logic, I found that my numbers work ‘failed to deliver a useful critique’ (Verran, 2001, p 20). And so, this chapter deconstructs the tools and techniques of statistical analysis in innovation studies. I describe statistical analyses, but my analytic tool is autoethnography (Ellis, 2004; Prasad, 2019). This chapter is an inquiry into my own experiences (auto-) navigating the culture of innovation statistics (-ethno-). The story (-graphy) is Sisyphean. I will suggest that following convention is like being condemned by the statistical gods to push numbers up a hill, hoping to successfully reach the summit, only to realize that the effort was ultimately meaningless.
Of course, the problems embedded in standardized innovation statistics are already a major concern for the field. Gault (2018, 2020), Godin (2002, 2005), and Perani (2019, 2021) have all examined the sociopolitical processes that shaped standardized innovation statistics and statistical manuals. Gault (2012, 2018, 2020) has been arguing for over a decade that standard statistical methods account for only a small portion of innovation activity and must be expanded beyond an exclusive focus on business. He points out that while innovation is broadly defined in the most recent edition of the OECD-Eurostat Oslo Manual, that definition is promptly ‘put to one side to get on with innovation in the business sector’ (Gault, 2020, p 102). And so, it is well established – by multiple scholars – that innovation statistics carry neoliberal politics. Here, I move from that historiographic style of number study to an ethnographic one (for this distinction, see Lippert, 2018, p 74, note 1). I consider how politics (and depoliticization) are enacted in the everyday use of conventional statistical tools and techniques.
As is the norm, I begin by presenting my descriptive statistics. Here we will see evidence that falsifies any claims against the existence of public innovation in goods.
But first, let the statistical speak begin …
Descriptive statistics
Results
Following the survey methods described in Chapter 6, I produced a dataset covering 27 organizations engaged in the production and use of ocean science instrumentalities in Nova Scotia, Canada. This included 12 scientific instrumentality companies, 10 PROs, and five public support organizations. Table 4 provides a summary of the product and process innovations reported by the 25 organizations where a key informant participated in the study.
Table 4: Product and process innovations in Nova Scotia’s ocean science instrumentalities innovation system

| | PROs | Companies | Support organizations | Total |
| --- | --- | --- | --- | --- |
| Number of organizations | 9 | 11 | 5 | 25 |
| Employees (full-time equivalents) | 1,281 | 474 | 28 | 1,783 |
| R&D intensity¹ | 46% | 41% | 16% | 44% |
| Product innovations² | | | | |
| Percentage of organizations that produced: | | | | |
| – instruments, machinery or equipment | 89% | 100% | 20% | 80% |
| – reports, information, documents, or manuscripts | 100% | 45% | 80% | 72% |
| – computer software or datasets | 78% | 73% | 60% | 72% |
| – education, training, or professional development | 89% | 73% | 100% | 84% |
| – data collection, processing, or analysis services | 100% | 45% | 60% | 68% |
| Percentage of organizations introducing products that were: | | | | |
| – new to the organization | 78% | 73% | 100% | 80% |
| – new to the field, sector, or market | 89% | 73% | 60% | 76% |
| – new to the world | 100% | 82% | 60% | 84% |
| Process innovations² | | | | |
| Percentage of organizations that introduced new: | | | | |
| – techniques or methods | 100% | 100% | 80% | 96% |
| – machinery or equipment | 100% | 73% | 80% | 84% |
| – software | 100% | 91% | 80% | 92% |
| Percentage of organizations introducing processes that were: | | | | |
| – new to the organization | 89% | 82% | 60% | 80% |
| – new to the field, sector, or market | 89% | 64% | 60% | 72% |
| – new to the world | 100% | 36% | 20% | 56% |
All participating organizations were involved in the production of novel outputs (that is, outputs that were new to the world or new to their field, sector, or market) and had incorporated some process innovations over the past five years. Indeed, R&D intensity was high throughout the network: 44 per cent of the 1,783 employees were dedicated to research and/or development activities. The average R&D intensity of public support organizations was lower (16 per cent) than the R&D intensity of PROs (46 per cent) and companies (41 per cent).
I asked respondents to indicate the types of outputs produced by their organization over the past five years. All five product types were reported by a majority of respondents. This included ‘instruments, machinery, and equipment’ which were produced by 20 of the 25 responding organizations. It is interesting that all the companies, eight of the PROs, and one of the public support organizations engaged in the production of instruments, machinery, or equipment. Novelty levels were also high across all three types of organizations. All the PROs, nine of the companies, and three of the public support organizations reported introducing goods or services that were ‘new to the world’ over the past five years.
The types of innovation and innovation novelty levels reported here confirm the high levels of innovation activity in this network. It is particularly important to note that PROs and public support organizations in this network all reported high levels of R&D intensity, product innovation, and process innovation. Most interestingly, innovative goods – instruments, machinery, or equipment – were produced, over the previous five years, by nine of the 14 public organizations in this study. Note that this finding alone runs counter to the widespread assumption – discussed in Chapter 1 – that innovation in goods is the exclusive domain of the private sector. These results are therefore revelatory in that they confirm the production of innovative technological goods by public organizations.
Significance?
‘Revelatory’ – what an understatement. I want to shout from the rafters about the importance of these numbers. They prove the existence of public innovation in goods! They contradict a widely held position about public sector innovation. And so, I think these numbers warrant a few adjectives. They deserve to have some rhetorical embellishment. They might even deserve to be described as ‘highly significant’. But these words are policed in statistical discourse. Results can be significant or not. No descriptive adjectives are allowed. And the word ‘significant’ must be accompanied by a p-value. It cannot be used around purely descriptive counting. This means that my descriptive statistics lack any real description; they are merely a preamble to the statistical tests that will establish mathematical significance. At least that is the convention.
But surely positivist scholars still accept that even one observation of a ‘black swan’ will falsify a theory like ‘all swans are white’. And make no mistake, this is the style of claim I am refuting with my descriptive data. Remember: ‘technological innovations, especially goods, are the exclusive domain of the private sector’ (Windrum and Koch, 2008, p 239, emphasis added). In The Logic of Scientific Discovery, Karl Popper railed against this kind of inductive ‘all statement’ (Popper, 2005, p 82). He was using an old metaphor (Taleb, 2010) – and doing so in a footnote – but his argument about black swans and falsifiability is legendary. In one of his appendices, Popper went on to argue that there is no need for probabilities (p-values) when testing statements with such certainty (Popper, 2005, p 378).
Now that I have said this, I worry that the black swan metaphor muddies the waters. It implies that I am describing rare outliers – and so, it allows for dismissiveness. You see, an increasingly common use of the metaphor comes from Nassim Nicholas Taleb (2010). He capitalizes it as ‘Black Swan’ – to describe rare, high-impact, unpredictable events. But I was observing some things closer to black elephants than Black Swans. A black elephant is a phenomenon that ‘either no one can see or chooses to ignore. Or, if its presence is recognized, no one is actually able to tackle it’ (Sardar and Sweeney, 2016, p 9). Black elephants have high predictability, and yet they are often passed off as rare and random events (Gupta, 2009). And so, I cannot let the black swan metaphor go too far. There are nine public organizations in my dataset that produced new instruments, machinery, or equipment in the preceding five years. They are elephants in the room. It is not hard to predict their presence, but they are concealed by conventional wisdom, political belief, and measurement techniques. The mundane discourse and unreflexive standards of descriptive statistics make them all too easy to ignore. And so, in the next section, I cave in to convention and start producing some p-values.
Locus of innovation
Results
As was noted in Chapter 2, prior research demonstrated that scientists, rather than private companies, are the locus of innovation for scientific instruments (von Hippel, 1976, 1988; Spital, 1979; Riggs and von Hippel, 1994). It follows that PROs – organizations that employ scientists and use scientific instrumentalities – will be the locus of innovation for a scientific instrumentalities innovation system. If we conceive of an innovation system as containing an interactive learning network, then we can use network analysis to assess the relative importance of different network positions.
The measure ‘degree centrality’ is typically interpreted as representing a node’s importance or influence in a network (Borgatti et al, 2013). In one application of this reasoning, I hypothesized (H1) that, within a scientific instrumentalities interactive learning network, PROs have significantly greater degree centrality than other types of organizations.
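For readers unfamiliar with the measure, degree centrality simply counts a node’s ties, often normalized by network size. A minimal sketch in Python using networkx – with invented organization names, and not the UCINET workflow I actually used – looks like this:

```python
import networkx as nx

# Toy directed network of organizations; node names are illustrative only.
G = nx.DiGraph()
G.add_edges_from([
    ("PRO_A", "Company_X"), ("Company_X", "PRO_A"),
    ("PRO_A", "Support_S"), ("Company_Y", "PRO_A"),
    ("Support_S", "Company_Y"),
])

# networkx normalizes by (n - 1); for directed graphs the score combines
# in- and out-degree, so it can approach 2. Raw degree counts are an
# equally common convention.
centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```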
I conducted a quadratic assignment procedure (QAP) (Hubert, 1987; Krackhardt, 1988; Martin, 1999) t-test to compare the degree centrality of public research organizations with the degree centrality of other organizations in the network: private companies and public support organizations. QAP is considered superior to ordinary linear regression for network analysis (Krackhardt, 1988). This resampling process takes observed data and randomly re-arranges the rows and columns of a dependent variable matrix. The relational structure of the dependent matrix is preserved, but it is no longer related to the independent variable matrix because observations have been reassigned to different nodes. This approach can be used to create a collection of observations that could have occurred at random. Properties of the observed data can then be compared against the properties of several thousand random permutations. The result of QAP is a permutation distribution that allows network analysis software to evaluate the statistical significance of observations: calculating the percentage of random permutations that yield values greater or less than the observed values.
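To make this resampling logic concrete, here is a minimal Python sketch of a QAP-style permutation test for a difference in mean degree centrality. Everything below is illustrative – the data are random stand-ins, and my actual analysis used UCINET. The sketch exploits the fact that jointly permuting the rows and columns of an adjacency matrix simply permutes the nodes’ degree scores:

```python
import numpy as np

rng = np.random.default_rng(42)

def qap_mean_diff_test(adj, is_pro, n_perm=10_000):
    # Degree = in-degree + out-degree for a directed adjacency matrix.
    degree = adj.sum(axis=0) + adj.sum(axis=1)
    observed = degree[is_pro].mean() - degree[~is_pro].mean()

    n = adj.shape[0]
    count = 0
    for _ in range(n_perm):
        # Jointly permuting rows and columns preserves the relational
        # structure of adj; its effect on degree scores is just degree[p].
        p = rng.permutation(n)
        diff = degree[p][is_pro].mean() - degree[p][~is_pro].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # p-value: share of permutations at least as extreme as the observation.
    return observed, count / n_perm

# Toy data: six organizations, the first three flagged as PROs.
adj = rng.integers(0, 2, size=(6, 6))
np.fill_diagonal(adj, 0)
is_pro = np.array([True, True, True, False, False, False])
obs, p_value = qap_mean_diff_test(adj, is_pro)
print(f"observed mean difference = {obs:.2f}, p = {p_value:.3f}")
```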
Based on the QAP t-test, the degree centrality scores for PROs (M = 18.20, SD = 3.37) were not significantly higher than the degree centrality scores for other organizations in the network (M = 15.65, SD = 5.34); t(25) = 2.55, p = 0.11. Hypothesis H1 was not supported. This result suggests that the slightly higher average degree centrality for PROs in this network could occur at random: a similar difference in means occurred in 11 per cent of 10,000 random permutations of the observed data.
In interpreting this result, it is important to note that the hypothesis was drawn from a literature on scientific instrumentality innovation that does not discuss public support organizations (see von Hippel, 1976, 1988; Spital, 1979; de Solla Price, 1984; Kline, 1985; Kline and Rosenberg, 1986; Rosenberg, 1992; Riggs and von Hippel, 1994; Gorm Hansen, 2011). Prior studies of scientific instrument innovation examined the relative importance of only two roles: ‘users’ and ‘producers’ (von Hippel, 1976, 1988; Spital, 1979; Riggs and von Hippel, 1994). These studies did not include any individuals or organizations that were similar to the public support organizations in Nova Scotia’s ocean science instrumentality innovation system. It is possible that similar public support organizations did not exist at the time or in the settings where those studies were conducted.
To further understand the impact of public support organizations on my results for H1, I conducted a post hoc hypothesis test (H1b). If this study had used a data sampling approach, post hoc hypothesis testing using classical statistical tests would be problematic; there would be a high risk of a Type I error. However, there are fundamental differences between the assumptions underlying classical statistical tests of sample data and the assumptions underlying QAP hypothesis tests of whole network data (Krackhardt, 1988; Dekker et al, 2007; Borgatti et al, 2013). It is appropriate to state and test post hoc hypotheses in this study because the dataset includes the whole network population – not a sample – and because the significance of each result is evaluated using a new, randomly generated distribution of permuted observations – rather than an assumed normal distribution. Under these network analysis conditions, it is normal and appropriate to conduct post hoc tests (for example, Kilduff, 1992; Grosser et al, 2010; Soltis, 2012; Lopez-Kidwell, 2013; Tang et al, 2014) and to undertake exploratory data analysis (for example, Butts, 2008; de Nooy et al, 2011; Borgatti et al, 2013).
My post hoc hypothesis (H1b) was that public organizations have significantly greater average degree centrality than private companies in this network. I conducted a QAP t-test to compare the mean degree centrality of public organizations – PROs and support organizations – with the mean degree centrality of private companies. I found that the degree centrality scores for public organizations (M = 18.47, SD = 2.87) were significantly higher than the degree centrality scores for private companies (M = 14.25, SD = 5.75) in this network: t(25) = 4.22, p = 0.02. The post hoc hypothesis (H1b) was supported. This could suggest that public organizations – PROs and support organizations – are more important than private companies in the interactive learning network. The relatively lower degree centrality scores for private companies in this network are consistent with prior conclusions that private manufacturers are less important – not the ‘locus’ – for scientific instrument innovation (von Hippel, 1976, 1988; Spital, 1979; Riggs and von Hippel, 1994). The highest degree scores in this network are found among a combination of public organizations, including both PROs and public support organizations. This may suggest that public support organizations are an important extension of the scientific enterprise, even if their employees do not directly perform scientific investigations.
Because degree centrality is a common proxy for importance in a network (Gay and Dousset, 2005; Takeda et al, 2008; Borgatti et al, 2013), the foregoing is a common interpretation of differences in degree centrality. However, there is an alternative explanation that cannot be discounted: higher degree centrality scores could also suggest that public organizations in this network are simply more active – maintaining more relationships – without necessarily being more important to interactive learning.
Linearity?
I conducted a multiple network regression to predict degree centrality from public/private organizational status, organizational age, size (in full-time employees), and R&D intensity. These variables did not significantly predict degree centrality, F(4, 27) = 2.30, p = 0.091, R² = 0.17. Only public/private organization status added significantly to the prediction, p = 0.03.
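For completeness, a node-level permutation analogue of that regression can be sketched as follows. This is a simplification – full MRQAP procedures such as those of Dekker and colleagues (2007) are more involved – and the data below are simulated stand-ins, not my survey data:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_regression(X, y, n_perm=10_000):
    # Ordinary least squares fit with an intercept column.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r2 = 1 - np.var(y - X1 @ beta) / np.var(y)

    # Null distribution: refit after shuffling the outcome across nodes.
    count = 0
    for _ in range(n_perm):
        yp = rng.permutation(y)
        b, *_ = np.linalg.lstsq(X1, yp, rcond=None)
        if 1 - np.var(yp - X1 @ b) / np.var(yp) >= r2:
            count += 1
    return beta, r2, count / n_perm

# Simulated node-level data for 27 organizations (illustrative only).
n = 27
X = np.column_stack([
    rng.integers(0, 2, n),    # public (1) vs private (0) status
    rng.integers(2, 60, n),   # organizational age in years
    rng.integers(3, 400, n),  # size in full-time employees
    rng.random(n),            # R&D intensity
])
y = rng.normal(15, 5, n)      # degree centrality scores
beta, r2, p_value = permutation_regression(X, y)
print(f"R² = {r2:.2f}, p = {p_value:.3f}")
```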
However, that regression paragraph and its results stayed in the ‘pocket slides’ at my thesis defence. They were not requested and so I did not present them. Questions about control variables had come up in a practice session, but not on the big day.
Overall, I was floored by how little reaction these ‘locus of innovation’ statistics produced. I did all this work and yet no one stopped me to say: ‘Wait, why are you testing a linear model hypothesis and using linear statistics?’ After all, I had parroted the argument that the linear model of innovation is out of date. This seems to confirm that Benoît Godin was right: the linear model of innovation persists today because it is ‘entrenched’ in statistics (2017, p 78). Standard innovation survey methods still carry linear model assumptions (Godin, 2017). Collecting data in this way makes it possible to perform linear statistical tests. And because such linear statistics are so entrenched, no one thinks to question them.
Chain links
Results
The literature on scientific instrumentality innovation discusses symbiotic relationships between those who produce science and those who produce scientific instrumentalities (de Solla Price, 1984; Rosenberg, 1992; Gorm Hansen, 2011). Therefore, relationships between PROs and instrumentality companies should include multiple concurrent types of interactive learning with knowledge flows in both directions. In network analysis terms, this means the relations should be multiplex and bidirectional. Stated as a hypothesis (H2), this means that within a scientific instrumentalities interactive learning network, relations between PROs and private companies are multiplex and bidirectional.
The ocean science instrumentality organizations I surveyed in Nova Scotia had a network density of 0.64, indicating that 64 per cent of the possible relations between any two organizations were present. Out of the 702 possible directed relations in this network (27 organizations × 26 potential partners), there are 240 possible relations between PROs and instrumentality companies (10 PROs × 12 companies × two directions). Interactions were reported for 124 of these dyadic pairs. Seventy-four of these interactions were multiplex. Ninety-two relations were bidirectional. Seventy relations were both multiplex and bidirectional.
I calculated a Jaccard similarity coefficient to assess the degree to which the set of relationships between PROs and instrumentality companies intersected with the set of multiplex and bidirectional PRO-company relations. For this test, the Jaccard coefficient was more appropriate than a Pearson correlation coefficient because the data are binary (Hanneman and Riddle, 2005). The Jaccard coefficient is an index of the similarity between two sets of binary values. The hypothesis was focused on the composition of PRO-company relations, so the test was conducted using only the data on PRO-company dyads. In other words, support organizations were not included in this analysis, nor were PRO-PRO and Company-Company relations. The results of the test were assessed for significance using the QAP with 10,000 permutations. The distribution of similarities for the 10,000 random permutations ranged from 4 per cent to 54 per cent (M = 23.2 per cent, SD = 6.3 per cent). I found a significant similarity between the two sets of relations: J = 0.56, n = 124, p < 0.001. The majority (56 per cent) of observed relationships between PROs and instrumentality companies were multiplex and bidirectional. Hypothesis H2 was supported. This result affirms prior discussion of symbiotic relations between those who produce science and those who produce scientific instrumentalities.
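The Jaccard calculation itself is simple enough to sketch in a few lines of Python, with a QAP-style permutation for significance. The dyad vectors below are simulated stand-ins for my survey data:

```python
import numpy as np

rng = np.random.default_rng(7)

def jaccard(a, b):
    # Jaccard similarity of two binary vectors: |A and B| / |A or B|.
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# For each of 240 possible PRO-company dyads: is a relation present, and is
# it both multiplex and bidirectional? (Simulated, illustrative only.)
present = rng.random(240) < 0.5
multiplex_bidir = present & (rng.random(240) < 0.55)

observed = jaccard(present, multiplex_bidir)

# QAP-style significance: permute one vector, recompute, and count the share
# of permutations with similarity at least as large as observed.
perm_js = [jaccard(present, rng.permutation(multiplex_bidir))
           for _ in range(10_000)]
p_value = np.mean([j >= observed for j in perm_js])
print(f"J = {observed:.2f}, p = {p_value:.4f}")
```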
Triviality?
‘This result affirms prior discussion’ – what a feeble attempt to justify inane details. In my thesis, I went even further. I dedicated pages and pages of analytical discussion to showing that the transfer of equipment and technical services is a critical ‘channel’ of interactive learning between PROs and private instrumentality companies. But none of the numbers in the previous section or in the thesis add substantively to our understanding of scientific instrumentality innovation. This is merely quantification of insights that were established many years ago. Yet, the numbers seem to add value. They suggest greater rigour than the previous qualitative studies. The numbers suggest greater certainty. They seem more definitive. But make no mistake: there is nothing innovative about these innovation statistics. They are an extraordinarily incremental contribution. They are rigour to the point of rigor mortis.
Why couldn’t I admit that these results are trivial? Because for me – their author – these numbers were both a fait accompli and a major feat. I knew what they would say. But I was also tremendously proud to have produced them. These data represent months of effort. It was like solving a complex puzzle: finishing it made me feel clever and accomplished. I impressed myself and I hoped this work might also impress others. I felt like a real social scientist because I was able to produce really complex statistics. But in so doing, I made the results inaccessible to anyone who might use them for shaping policy or practice.
Policy makers would be better advised to read one of the qualitative studies anyway. De Solla Price (1984) doesn’t bore you with unintelligible mathematics. But without the numbers, work like his feels less certain, less dependable. Ironically, policy makers are more likely to respect my quantification, but less likely to understand it. I made these ideas trivial through mathematics. This is not unlike Saifer and Dacin’s observation that ‘the overproduction of data doesn’t lead to more knowledge, but rather greater levels of organizational ignorance’ (2021, p 627). In the 1960s, Ernest Becker warned that research was ‘becoming mired in data and devoted to triviality’ (Becker, 1968, p xiii). More recently, leading autoethnography scholar Art Bochner has warned that this ‘devotion to triviality can lead to alienation’ (2016, p 51). Nonetheless, I will now try to eke out some meaningful impact. I now turn to a statistical test that aims to mirror a real-world innovation system dynamic (but let’s not forget to mute the politics).
System dynamics
Results
In Chapter 3, I described a major innovation system dynamic that occurred five years before my data collection: substantive reductions in funding for public science across the country, and particularly in ocean science (Turner, 2013). This was part of a broader decline in public science globally that will have ‘long term adverse consequences’ (Archibugi and Filippetti, 2018, p 108) for innovation and development. Here in Nova Scotia, reduced funding for ocean science stood in contrast to increased emphasis on ocean technology development. Indeed, my stories in Chapter 4 suggested that Nova Scotia’s ocean science and technology innovation system might be structurally dependent on public research organizations as its ‘anchor tenants’ (Agrawal and Cockburn, 2003; Niosi and Zhegu, 2005; Niosi and Zhegu, 2010).
In graph theory, the structural dependence of a network on certain nodes is referred to as ‘robustness’ (Callaway et al, 2000; Barabási, 2013). A network’s robustness is a function of how well it remains connected when individual nodes or edges are removed (Borgatti et al, 2013). A network is said to be highly robust when a large number of nodes or edges need to be removed before the network begins to fragment into many small components (Borgatti et al, 2013). Robustness has mostly been qualitatively explored in innovation studies. Some have suggested that Silicon Valley’s present-day innovation system is highly susceptible – not robust – to the loss of venture capital firms (Ferrary and Granovetter, 2009). Others have suggested that Boston’s biotech innovation system was not robust to the removal of PROs in the late 1980s (Powell et al, 2012).
The dynamic effect underlying network robustness is fragmentation. In a network with no fragmentation, all nodes are members of one component – no individual nodes are isolated from the group, and no small groups of nodes are disconnected from the main component. When there is no fragmentation present, any node in a network can reach any other node by working through its neighbours. For an innovation system network, this could mean that knowledge and learning can flow efficiently and effectively.
Stephen Borgatti (2006) identified several ways to measure network fragmentation. For all these measures, a network becomes fully fragmented (F = 1) when all nodes are disconnected from one another. Fragmentation measures differ in the ways that they account for degrees of fragmentation. The simplest approach is to count the number of components – or groups of nodes – in a network and then divide them by the total number of nodes. Using this measurement technique, Calignano, Fitjar, and Kogler (2018) observed that the aerospace cluster in Apulia, Italy was highly fragmented in a static sense. A more nuanced alternative is Borgatti’s (2006) distance-weighted fragmentation (DF), which weights each pair of nodes by the geodesic distance between them:

$$DF = 1 - \frac{\sum_{i}\sum_{j \neq i} 1/d_{ij}}{n(n-1)}$$
Here, i and j are nodes in a network, d_{ij} is the geodesic distance between those nodes, and n is the total number of nodes in the network. The numerator incorporates a reciprocal of the distance between nodes. For nodes that cannot reach one another – in other words, distance is infinite – the reciprocal distance is zero. Distance-weighted fragmentation has a lower limit of zero when every node is adjacent to every other node. It has an upper limit where every node is an isolate. For my purposes, distance-weighted fragmentation is useful because it can be a node-level measure: the change in DF of the network can be calculated after removal of any individual node. This concept of distance-weighted fragmentation allows me to hypothesize (H3) that removing individual PROs from a scientific instrumentalities interactive learning network results in significantly greater distance-weighted fragmentation than removing other types of organizations.
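A minimal Python sketch of this measure – and of the node-removal comparison used below – might look as follows. The toy graph is illustrative only; networkx’s shortest-path lengths supply the geodesic distances:

```python
import networkx as nx

def distance_weighted_fragmentation(G):
    # DF = 1 - (sum of reciprocal geodesic distances) / (n * (n - 1)).
    # Unreachable pairs simply contribute a reciprocal distance of zero.
    n = G.number_of_nodes()
    recip_sum = 0.0
    for source, lengths in nx.all_pairs_shortest_path_length(G):
        for target, d in lengths.items():
            if source != target:
                recip_sum += 1.0 / d
    return 1.0 - recip_sum / (n * (n - 1))

def delta_df(G, node):
    # Change in network-level DF after removing a single node.
    H = G.copy()
    H.remove_node(node)
    return distance_weighted_fragmentation(H) - distance_weighted_fragmentation(G)

# Toy undirected network (illustrative only).
G = nx.krackhardt_kite_graph()
for v in sorted(G.nodes):
    print(v, round(delta_df(G, v), 4))
```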
I conducted a QAP t-test to compare the mean change in DF after removal of a PRO with the mean change in DF after removal of other organizations in the network (private companies and support organizations). The fragmentation scores for PROs (M = 0.002, SD = 0.005) were not significantly greater than the fragmentation scores for other organizations in the network (M = –0.001, SD = 0.008): t(25) = 0.004, p = 0.11. Hypothesis H3 was not supported. This result suggests that the larger average fragmentation scores observed for PROs could have occurred at random.
The result for this test is like the result for the test of degree centrality scores (H1). As with hypothesis H1, I formed a post hoc hypothesis to account for the presence of public support organizations in the data (H3b): removing individual public organizations from a scientific instrumentalities interactive learning network results in significantly greater distance-weighted fragmentation than removing private companies.
I conducted a second QAP t-test to compare the mean fragmentation scores for public organizations (PROs and public support organizations) with those for private companies. The fragmentation scores for public organizations (M = 0.003, SD = 0.004) were significantly greater than the fragmentation scores for private companies (M = –0.004, SD = 0.009): t(25) = 0.006, p = 0.013. The post hoc hypothesis (H3b) was supported. This result suggests that, on average, this innovation system would become more fragmented following the loss of a public organization than it would following the loss of a private company.
Attack!
Convention is clearly the enemy of antagonism. Here, as in my PhD thesis, I have provided a conventional description of my system fragmentation analysis. In fact, the four results sections in this chapter all follow the conventions for presenting statistical results set forth by the American Psychological Association (APA). Although I used UCINET to produce the statistics, I paid for a subscription to Laerd Statistics and followed its templates for converting statistical results from SPSS (Statistical Package for the Social Sciences) into the writing style required by the APA. To write the ‘I conducted …’ paragraphs in this chapter, I simply filled in the blanks in the relevant templates. The outcome of these templates is predictable. It is an understated, technocratic description of a relatively complex statistical analysis. The writing conventions give a sense of rationality and objectivity. They depoliticize the discussion. And yet, I was trying to mount a major counteroffensive in the Canadian ‘War on Science’ (Turner, 2013).
In the final months of my PhD studies, it became clear that these statistical results were acceptable, but their politics were not. I will not recount the micropolitics that played out. But the big ‘P’ politics are critical to my arguments in this book. In presenting the analysis in the previous section, I dropped any sense of its political motivation. Following that convention was necessary to complete my PhD. But Godin (2005) demonstrates that statistics on science and technology are first political, before they are ever (re)presented as objective. Once wrapped in ‘the optics’ of objectivity, the politics of these numbers disappear from view.
Five years ago, substantial federal cuts were made in ocean science across Canada (Bailey et al, 2016; Turner, 2013) at the same time as regional policy networks were prioritizing investments in ocean technology innovation via industrial policy (Government of Nova Scotia, 2012; Greater Halifax Partnership, 2012). Ocean science and ocean industry policies were moving in opposite directions. My results suggest that this disconnect may have been problematic because, in the interactive learning network that I observed, the loss of a public organization would cause greater fragmentation to the network – on average – than the loss of a private company. This suggests that the innovation system may be structurally dependent upon public organizations. Furthermore, I found that the majority of interactive learning relationships between PROs and private companies in this network were symbiotic. This suggests that it may be important to connect public policies in support of private companies in this system (i.e., industrial policies) with policies that affect PROs and public support organizations (i.e., science policies).
Notice the muted phrases like ‘may have been problematic’ and ‘this suggests’. This soft language leaves room for neoliberalization: these results can be read as an indication that ocean technology innovation in Nova Scotia should become less dependent upon public organizations. My argument would have been the opposite: ocean science is a fundamental public good.
Significant but meaningless
In this chapter, I produced and critiqued four sets of statistics corresponding to three perspectives in innovation theory. First, I presented descriptive statistics that should have been sufficient evidence of the ‘black elephant’ that is public innovation in goods. But alas, this counting was not statistically ‘significant’. Next, I tested insights from linear model studies of scientific instrument innovation. I found that the locus of scientific instrumentality innovation rests with public sector organizations. However, my representative – but fictional – audience got caught up in a critique of the number-crunching details. It was less obvious that the whole exercise was stuck in a vicious cycle of linear assumptions, models, and statistics. Then, I tested old insights about the symbiotic and interactive relations between scientists and instrument manufacturers. I felt clever in enumerating those old insights. The results appeared more dependable than past research, but were completely trivial. Finally, I strived for ‘real world’ impact by mathematically testing a misguided public policy. The numbers supported my position, but that position was undermined by representational conventions. At each of these stages, I was pushed forward by enthusiasm, optimism, and the intellectual challenge. Then, when the work was done, I was deflated by anger, frustration, and disappointment. This is how statistics held their sway for so long in my life. These tools help me feel clever, accomplished, and accepted (at the disciplinary ‘convention’ – or ‘gathering’). I kept returning to statistics for these reasons and they kept letting me down. Each time I write statistical results, their incremental futility surfaces. These are the moments when the whole statistical exercise feels Sisyphean. These are the moments when the statistical work – work that is so valued in innovation studies – is revealed to be significant, but meaningless.
Sisyphus, proletarian of the gods, powerless and rebellious, knows the whole extent of his wretched condition; it is what he thinks of during his descent. The lucidity that was to constitute his torture at the same time crowns his victory. There is no fate that cannot be surmounted by scorn. (Camus, 1955, p 109)
Here, Camus is arguing that we cannot find meaning in life by pretending that all will be well tomorrow or by hoping that some God will eventually save us. Nor can we simply give up: in the absurdity of life, ‘suicide is not legitimate’ (Camus, 1955, p 8). Instead, Camus provides a book-length argument for consciousness and ‘revolt’ (see especially, Camus, 1955, pp 53–5). And in my own little way, I have carried that thought into the realm of innovation statistics.
Rather than giving up on statistics, I have embraced the absurdity of the exercise. I have not retreated from this absurdity through any ‘philosophical suicide’ (Camus, 1955, p 32) – that is, the kind of escape to certainty where I might place my faith in some other universal ideal. That would be no less absurd than placing my faith in positivism, neoliberalism, or numbers. After all, statistics are a ‘desecularized’ religion – part of the scientific substitute for God (Gephart, 2006, p 426). This chapter has been a smirk at the absurdity of the statistical religion – an affirmation of my own experience, voice, and freedom. Importantly, I have not claimed that the numbers I produced were disconnected from reality – that is the kind of ‘absurdity’ that logical positivist philosophers get worked up over. Rather, I have tried to share my experience of personal alienation-through-statistics. I hope that this has ‘resonance’ (Ellis, 2004, p 22) for other recovering positivists who are similarly tired of fragmenting their identities to please the statistical gods. We need not succumb to this discipline.
Autoethnography helped me work through the alienation of statistics and produce a contribution here that I consider meaningful. Making meaning for self and others is the whole point of autoethnography. Art Bochner has described it as ‘an expression of the desire to turn social science inquiry into a non-alienating practice’ (2016, p 53).
I submit that this is the value of fusing autoethnography and ethnostatistics. This fusion can help us create (our own) meaning from ‘inside’ statistical tools, techniques, and practices. This idiographic and ‘situated’ meaning is a revolt against the absurd. In response to Lippert’s call for more ‘tools to open up numbers and calculations’ (2018, p 53), my autoethnography of statistics is also an alternative and/or addendum to Gephart’s (1988) ethnostatistics, Callon and Law’s (2005) qualculation, Verran’s (2001) ‘ontologizing troubles’ (Lippert, 2018), and B. T. Lawson’s (2023) ‘life of a number approach’. These and many other forms of ‘number study’ involve the analysis of other people’s enumeration – assessing the counting within other people’s knowledge claims. Verran (2001) seeks some redress from this when she decomposes her own analysis. But the ‘auto-’ that I have invoked in this chapter precludes us from taking any God-like position in the first place – it inhibits what Haraway (1988) called ‘the God trick’. Instead, autoethnography pushes us through the discomfort of our own experiences and demands radically reflexive authenticity (Ellis, 2004). In my next and final chapter, I will argue that this kind of reflexivity must be the container for any dark innovation toolkit.