Max Roßmann, Ph.D. Philosophy
M.A. Philosophy of Science & Technology
B.Sc. Chemical Technology

Digital methods for Technology Assessment

I drafted the following paragraphs in January 2021 to structure a discussion about the use of digital methods in TA.

The following five paragraphs give an intuitive outlook on how digital methods could support Technology Assessment – in empirical social research, in the automation of work steps, and as a medium for exchange with stakeholders and the public. These distinctions, together with some practical examples and the two closing paragraphs on necessary expertise and material resources, are intended to open a discussion about possible project contexts and concrete options for action. I leave out TA's vast body of engineering methods, such as LCAs and cost simulations, as they are not particular to TA. This text is not based on extensive research or textbooks, but follows my personal interest in projects such as [1], applications, visions, and my own experiments.

Empiricism & Analysis: The "Computational Social Sciences" (CSS), which have become established in recent years, distinguish between two areas of application for digital methods. On the one hand, computational methods (e.g., agent-based simulation) are used to explain social phenomena; on the other hand, digital methods serve the empirical collection, analysis, and presentation of data. The Internet provides access to various datasets and platforms, such as Scopus, OpenAlex, or Google Scholar for research publications, EPO for patent data, OSM for geospatial data, Yahoo Finance for economic data, MPwatch and Integritywatch for parliamentary voting behavior and lobbying, and MediaCloud or Genios and the corresponding archives for print media. Of particular interest for empirical social research are the "traces left behind" in social media, which complement surveys, workshops, and discourse analyses. For transparency reasons, it makes little sense to analyze such data manually or to merely follow the social media discourse live. Instead, one uses so-called APIs (application programming interfaces) to send queries and record data. For example, to analyze Twitter, one writes a small program, loads it onto a server (e.g., Amazon Web Services), filters by search queries or actors, and analyzes the results for influencers, trends, networks, and polarization. In the computational social sciences, a critical comparison with other media and data sources has become part of a study's validation. In addition, it is often worthwhile to "re-analyze" or "re-mix" old datasets (especially open data) under a new question, for example, to subsequently identify clusters or to detect trends in attitudes and behavior by comparing them over time.
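As a minimal sketch of the recording-and-analysis step, the following Python snippet parses a JSON payload of the kind such an API might return and counts which actors are mentioned most often – a crude proxy for influencers. The payload, field names, and actors are illustrative assumptions, not any platform's actual API.

```python
import json
from collections import Counter

# Illustrative stand-in for a JSON response from a social media API;
# real APIs differ in endpoint, authentication, and field names.
api_response = json.dumps({
    "posts": [
        {"author": "alice", "text": "New study on #3Dprinting", "mentions": ["bob"]},
        {"author": "bob", "text": "Replying to @alice", "mentions": ["alice"]},
        {"author": "carol", "text": "RT @alice on #3Dprinting", "mentions": ["alice"]},
    ]
})

def top_mentions(raw_json, n=3):
    """Count who is mentioned most often across all recorded posts."""
    posts = json.loads(raw_json)["posts"]
    counts = Counter(m for post in posts for m in post["mentions"])
    return counts.most_common(n)

print(top_mentions(api_response))  # → [('alice', 2), ('bob', 1)]
```

In a real project, the hard-coded payload would be replaced by repeated, logged API queries from the server, so that the data collection itself remains documented and reproducible.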

About automation: While the last paragraph presented access to different data sources, this paragraph is about possibilities to automate manual work steps. Automation already starts with the creation of a bibliography in, e.g., Citavi or Zotero. Another example was my metadata converter to facilitate the entry of metadata into TATuP's Open Journal System. Tools like Selenium allow one to automate almost all browser clicks and search mask inputs to quickly retrieve, filter, and summarize large amounts of data. More refined forms of web scraping make use of requests and hidden APIs. Figure 3 illustrates how we "digitally supported" the literature search on 3D printing: the script searches a list of German media for keywords like "3D printing", saves the URLs, downloads all articles, searches them for keywords, isolates the sentences containing a keyword, and removes stopwords ("and", "is", "like", …). This makes searching the corpus easier. Further insight into digitally assisted media research on 3D printing vs. blockchain, nuclear power, and repositories is provided by Figure 4 and Figure 5. Since one is not limited to print media but can include social media, patents, market data, RSS feeds, mailing lists, and political and business websites, people use automation for horizon scanning or feed aggregated results back to social media with a (partially) automated bot [2]. Without relevant expertise and manual reading, however, word counts, trends, and sentiment analyses without specially trained models are increasingly used as mere rhetorical props in presentations and lectures. In the end, such tools augment rather than replace the researcher.
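The keyword–sentence–stopword steps of such a script can be sketched in a few lines of Python; the article text and the tiny stopword list below are, of course, illustrative stand-ins for a downloaded corpus and a full stopword list.

```python
import re

STOPWORDS = {"and", "is", "like", "the", "a", "of"}  # tiny illustrative list

def keyword_sentences(text, keyword):
    """Isolate sentences containing the keyword and strip stopwords."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = [s for s in sentences if keyword.lower() in s.lower()]
    cleaned = []
    for s in hits:
        words = re.findall(r"\w+", s.lower())
        cleaned.append([w for w in words if w not in STOPWORDS])
    return cleaned

article = ("3D printing is changing spare part logistics. "
           "Critics doubt the business case. "
           "A new study praises 3D printing like a revolution.")
print(keyword_sentences(article, "3D printing"))
```

Applied to thousands of downloaded articles instead of one string, this is already the core of the corpus-reduction step described above.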

As a medium: (Interactive) representations of data and data-driven storytelling promise new, highly underestimated communication possibilities – an alternative to the paper and the TA report, if one disregards established system constraints. Since Corona at the latest, everyone knows interactive maps and case numbers. Further inspiration is offered by the platform "Our World in Data" [1] – here one can find graphs, maps, and datasets "to make progress against the world's biggest problems". Maps and various models can make data tangible or serve as platforms for collective exchange about a subject. VR models are possible here, but simple representations also serve this function. 3D printers can even rematerialize data models: for a participatory process, for example, population density was mapped onto a 3D model of Mumbai (Figure 6). In TRANSENS, colleagues developed a digital map that incorporates relevant data sources and allows citizens to enter relevant information [2]. Other data scientists provide raw and processed data so that journalists and stakeholders can use them politically as Issue Advocates and Honest Brokers (cf. Figure 5). On the Political Dashboard, one can, for example, follow analyses of Twitter and Facebook communication live [3] – similarly, a platform for live scanning of technology trends could provide researchers with a comparable tool. The UK think tank Autonomy offers a nifty visualization of Covid-threatened jobs [4]. An example closer to code is the notebook [5] on Covid vaccination progress [6] – laymen can understand the most important information, and experts can follow every step of the program code and reproduce and modify it if needed. Increasingly, the interplay of data and narratives (my Ph.D. topic) is becoming widespread in the data sciences and in journalism ([7], [8], [9]).
In the latter, it is the narrative that puts the heterogeneous and distant data into context and gives them meaning, just as the detective story gives meaning to the clues in a detective game. Nevertheless, one can zoom in, show different layers, and often trace or modify data sources and code. In other words:

"Computers are good at consuming, producing and processing data. Humans, on the other hand, process the world through narratives. Thus, in order for data, and the computations that process and visualize that data, to be useful for humans, they must be embedded into a narrative – a computational narrative – that tells a story for a particular audience and context."

(Perez & Granger, 2015) [10]
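In that spirit, a computational narrative can be as small as a few lines that turn numbers into a sentence for a particular audience. The daily case numbers below are invented for illustration; the point is only that the text and the computation stay linked.

```python
# Synthetic daily case numbers (illustrative, not real data).
cases = [120, 135, 150, 160, 180, 210, 240]

def weekly_narrative(daily_cases):
    """Embed simple statistics into a readable sentence."""
    total = sum(daily_cases)
    avg = total / len(daily_cases)
    trend = "rising" if daily_cases[-1] > daily_cases[0] else "falling"
    return (f"Over the last {len(daily_cases)} days, {total} cases were "
            f"reported (on average {avg:.0f} per day), and the trend is {trend}.")

print(weekly_narrative(cases))
```

In a notebook, a reader can check every number in the sentence against the code and the data above it, and an expert can swap in other data or statistics.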

Necessary expertise: The last example leads to what is probably the most important question: how much and what expertise or digital literacy is needed to apply digital methods? Just as with science communication, I do not think that every researcher should be trained in the same skill set. Instead, I suggest expert collaborations, wherein digital methods add another important skill for successful TA projects. If experts in digital methods work closely together with the relevant TA experts, the time-consuming preparation of datasets and also the discussion and validation of other studies become much easier – e.g., to identify whether anomalies, such as the peaks in Figure 1 or Figure 4, are to be understood as events rather than artifacts. Instead of relying on a common language, I suggest implementing boundary objects [3] such as computational narratives, graphical user interfaces (GUIs, e.g., Gephi or SPSS), or APIs [2]. The search mask of our publication data entry, Manuel's patent tool, or the TATuP converter [3] show, for example, how GUIs can make self-developed apps accessible to colleagues. Python basics, which can be learned in a few days, are sufficient to modify and use ready-made scripts, e.g., in Google Colaboratory notebooks [4]. The more recently introduced interactive widgets even allow using them without any programming skills. Based on that, one can deepen the basics and "specialize" in, e.g., statistics and survey data, social media, data visualization, geo-data, simulation, natural language processing, or server and cloud applications. In my opinion, using digital methods is possible without a computer science degree. However, a research group or institute should take a long-term, strategic position in terms of personnel and seek collaborations with networks and other experts in order to develop a TA toolbox and experiment with what works best.
Furthermore, one should exchange information about current developments and experiences in the NetzwerkTA and related communities – and develop tools and databases for joint use.
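As a toy illustration of such a ready-made script, the following sketch keeps all adjustable parameters in one block at the top, so a colleague with only Python basics can change the keyword or the years without touching the logic below. The corpus is an invented stand-in for a real article database.

```python
# --- parameters a colleague can edit without reading further ---
KEYWORD = "blockchain"
YEARS = [2018, 2019, 2020]

# --- illustrative stand-in for a real article corpus ---
CORPUS = {
    2018: ["blockchain pilot announced", "nuclear debate continues"],
    2019: ["blockchain hype fades", "blockchain in logistics"],
    2020: ["quiet year for blockchain"],
}

def keyword_trend(corpus, keyword, years):
    """Count articles per year that mention the keyword."""
    return {y: sum(keyword in a for a in corpus.get(y, [])) for y in years}

print(keyword_trend(CORPUS, KEYWORD, YEARS))  # → {2018: 1, 2019: 2, 2020: 1}
```

Wrapping such a function in a notebook widget or a small GUI is then a modest step, and exactly the kind of boundary object suggested above.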

Material resources: I can give little information about material resources and expensive purchases. My experience with digital methods is largely limited to open programs and datasets, especially in the programming language Python. Open tools have the advantage that results can be replicated by other researchers and stakeholders and are therefore more suitable as common ground in policy issues. I use an Amazon Web Services (AWS) virtual machine as a server (one year free, then about 8 €/month); computing power, data speed, and storage space can be scaled as desired. The Python (and also the R) community is very active and develops many free libraries (collected code building blocks, e.g., on Git repositories) and tutorials, so that, e.g., the commercial MATLAB is slowly disappearing from science. Even pre-trained deep learning NLP models, such as Google's BERT, are free, and free support can be found on Stack Overflow for almost all questions. Since many open source projects finance themselves by selling individualized solutions, institutes can also commission libraries, scripts, and programs with GUIs. Other cost items would be datasets and services (XaaS). However, the most important thing seems to me to be responsible scientific staff: IT support or a kind of "translator from conventional TA to digital methods" is, in my opinion, not enough to apply or develop digital methods in TA.

[1] Here, for example, a large overview; more examples follow in the text.

[1] A concise critique of "bot research" by Florian Gallwitz

[3] Star, S. L., & Griesemer, J. R. (1989). Institutional Ecology, 'Translations' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39. Social Studies of Science, 19(3), 387-420.

[2] Ghazawneh, A., & Henfridsson, O. (2013). Balancing platform control and external contribution in third-party development: the boundary resources model. Information Systems Journal, 23(2), 173-192.

[3] TATuP converter for XML import from Excel:

[4] Google Colab: (but I think universities should host their own Jupyter notebooks rather than rely on Google Colab)


[2] Interactive repository map (work in progress):

[3] Political Dashboard

[4] The Jobs at Risk Index (JARI):

[5] Project Jupyter Manifesto: Computational Narratives as the Engine of Collaborative Data Science:

[6] Covid Vaccination Progress Notebook:

[7] Data Journalism / Narratives on

[8] Classics in computational story-telling:

[9] Mumbai GIS / URDI:

[10] Project Jupyter: Computational Narratives as the Engine of Collaborative Data Science