NAME

kirq - conduct qualitative comparative analysis

SYNOPSIS

kirq [--debug] [-y FILE] [-s FILE]

DESCRIPTION

Kirq is a cross-platform application for conducting qualitative comparative analysis (QCA).

OPTIONS

-y, --input=FILE

Read dataset from FILE; when FILE is -, read standard input

-s, --session=FILE

Read session from FILE

-g, --debug

Echo logging info to console

-h, --help

Display help and exit

Mandatory arguments to long options are also mandatory for short options.

USING KIRQ

There are three components to a QCA analysis: data set calibration, necessity analysis, and sufficiency analysis. Kirq facilitates the second and third of these but does not provide calibration procedures. Instead, you first calibrate your data set using your preferred spreadsheet or statistical software, then import the calibrated data into Kirq for the analysis of necessary and sufficient conditions. (An Excel/LibreOffice macro, fuzz, that implements the "direct" method of calibrating interval-ratio variables is available at http://www.grundrisse.org/qca/download/.)
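The fuzz macro itself isn't reproduced here, but the "direct" method it implements (Ragin 2008) can be sketched in Python. The three anchors and the example income values below are illustrative assumptions, not values taken from the macro:

```python
import math

def direct_calibrate(raw, full_out, crossover, full_in):
    """Direct-method calibration (Ragin 2008): map a raw interval-ratio
    value to a fuzzy membership score in [0, 1] via log odds.

    full_out  -- anchor for full non-membership (log odds -3)
    crossover -- point of maximum ambiguity   (log odds  0)
    full_in   -- anchor for full membership   (log odds +3)
    """
    deviation = raw - crossover
    if deviation >= 0:
        log_odds = 3.0 * deviation / (full_in - crossover)
    else:
        log_odds = 3.0 * deviation / (crossover - full_out)
    return math.exp(log_odds) / (1.0 + math.exp(log_odds))

# Illustrative anchors for a per-capita income variable:
print(round(direct_calibrate(5000, 2500, 5000, 20000), 3))   # 0.5 (crossover)
print(round(direct_calibrate(20000, 2500, 5000, 20000), 3))  # 0.953 (full membership)
```

Values at the crossover map to 0.5 exactly; the two outer anchors map to roughly 0.05 and 0.95, the conventional cutoffs for full non-membership and full membership.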

Kirq's implementation of the QCA workflow centers on the concept of a "session." A session can include any number of analyses, on any number of different data sets (limited only by RAM and disk space). You may save your session to a file at any time, and it's perfectly fine to have multiple session files. Any running instance of Kirq has one, and only one, active session; however, you may start multiple instances of Kirq, with each instance running a separate session.

The Session Window

The session window is located to the right of Kirq's main window, above the parameter specification window. It records each analysis that you conduct. By examining your session history, you can review your previous analyses and compare the results of different analyses to one another. You may rename any session item by double-clicking it. Right-clicking (or Ctrl-clicking on OS X) brings up a context menu that allows you to annotate the item with comments for yourself, examine the item's lineage (including the parameters responsible for generating it), or delete it.

In QCA, it is not unusual for different parameter specifications to generate identical results. When reducing a truth table, for example, a more parsimonious solution is not always possible; sometimes the "complex" solution is also the most parsimonious. Similarly, consistency thresholds that are close to each other (e.g., 0.90 versus 0.92) will often generate the same truth table. In such instances, Kirq does not generate a new session item but instead jumps the session highlight bar to the existing item. This keeps the session window from accumulating redundant entries; each session item is unique. By examining a session item's lineage, you can review each of the analyses (and their parameters) that generated that particular item.

Session items are organized in a series of tree-like structures, based upon the data set and outcome being analyzed. To facilitate comparisons, you can drag any session item out of Kirq and it will open in its own window.

Types of Objects

Kirq creates three types of objects that you will interact with: data sets, truth tables, and consistency/coverage tables.

Data Sets

Kirq creates data sets by importing data from an external file. After you import your data, Kirq neither locks nor holds open the external data file, and your session is unaffected if you delete, rename, move, or change that file: Kirq always works on the cached, imported data. This also means that changes to the external data file won't be propagated to Kirq automatically. Instead, you will need to reimport your data and rerun your analysis. (But see "BUGS," below.)

Kirq can import Excel files and plain text files. Most users manage their data sets using spreadsheet software such as Microsoft Excel or LibreOffice Calc, but as long as you save your data set in Excel or CSV format, Kirq will be able to read your file.

Please note that the way that the software identifies file types is very simplistic. If the file has an extension of ".xls", it's assumed to be saved in MS Excel 95/97/2000/XP/2003 format; if it has an extension of ".xlsx", it's assumed to be in MS Excel 2007/2010 format (aka OOXML or OpenXML). Otherwise, it's assumed to be plain text. When importing from an Excel file, Kirq reads only the first sheet of the file.
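That extension test can be sketched as follows (a sketch of the behavior described above, not Kirq's actual code; the label strings are illustrative):

```python
def detect_format(path):
    """Guess a file's format from its extension alone, as the
    simplistic check above does."""
    lower = path.lower()
    if lower.endswith(".xlsx"):
        return "excel-ooxml"    # MS Excel 2007/2010 (aka OOXML/OpenXML)
    if lower.endswith(".xls"):
        return "excel-binary"   # MS Excel 95/97/2000/XP/2003
    return "plaintext"          # anything else

print(detect_format("cases.xls"))   # excel-binary
print(detect_format("cases.csv"))   # plaintext
```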

Data sets (whether in Excel or plain text format) must be in an "observations by variables" format, with the observations as rows and the variables as columns. The first column must contain the names of the observations and the first row the names of the variables. The observations column must have a header, and there cannot be any blank cells. For example:

Obs,Var1,Var2,Var3
Amy,1,0,0
Sue,0,0.9,1
Tim,1,1,1

When the file is in plain text format, the software will attempt to guess the delimiter; you don't need to specify it, and most of the time everything should just work. Note that the software examines the second row of the file (i.e., the first row of data) when guessing the delimiter. Looking at the second row avoids confusion when any of the column headers contain "delimiter-like" characters.
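A similar second-row heuristic can be sketched with Python's csv.Sniffer (an illustration of the idea, not Kirq's actual detection code; the candidate delimiter set is an assumption):

```python
import csv

def guess_delimiter(text):
    """Guess the delimiter from the second line (the first data row),
    so delimiter-like characters in the headers don't mislead us."""
    lines = text.splitlines()
    sample = lines[1] if len(lines) > 1 else lines[0]
    return csv.Sniffer().sniff(sample, delimiters=",;\t ").delimiter

data = "Obs,Var1,Var2,Var3\nAmy,1,0,0\nSue,0,0.9,1\n"
print(repr(guess_delimiter(data)))  # ','
```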

It's fine to mix calibrated and uncalibrated data in the same dataset. However, Kirq will raise an error if you try to run an analysis on uncalibrated data (e.g., values outside of the 0.0--1.0 interval).
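A minimal sketch of that kind of range check (an illustration, not Kirq's actual code):

```python
def check_calibrated(values, name):
    """Raise if any value falls outside the fuzzy-set 0.0--1.0 interval."""
    bad = [v for v in values if not 0.0 <= v <= 1.0]
    if bad:
        raise ValueError(
            "variable %r is not calibrated: %s out of range" % (name, bad))

check_calibrated([1, 0.9, 0], "Var2")      # fine: all values in [0, 1]
# check_calibrated([3.2, 0.5], "income")   # raises ValueError
```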

Truth Tables

Truth tables are created as part of a sufficiency analysis. You may edit a truth table's outcome column by double-clicking on the cell that you want to edit; all other truth table columns are read-only. You may sort and filter truth table rows. By toggling the multi-sort toolbar button, you may sort by one column, then a second, then a third, and so forth.

Truth tables may be exported to plain text (whitespace delimited) by opening the File menu and selecting Export Table.

Consistency/Coverage Tables

Consistency/Coverage tables--concov tables, for short--are produced as a result of a necessity analysis or by reducing a truth table. Concov tables are read-only, but their rows can be dragged and rearranged to facilitate comparisons. You can also rearrange individual terms by double-clicking on the recipe.

Conducting a Necessity Analysis

Kirq includes automated necessity testing. Specify your outcome and select the causal conditions that you wish to test, then set your consistency and coverage thresholds and click "Analyze." Kirq produces a concov table that reports all causal combinations that are consistent with necessity.

Think of these terms as "candidate" necessary conditions. It is ultimately incumbent upon you, as the researcher, to determine whether it makes theoretical and substantive sense to conclude that one or more of these conditions are necessary for the outcome. To aid this process, Kirq lists the consistency and coverage scores of each term (sometimes called a "recipe"), along with the observations covered by the recipe. The solution row at the bottom of the screen lists the consistency and coverage scores for the complete solution of all candidate recipes ANDed together.
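Kirq's internals aren't shown here, but the standard fuzzy-set necessity measures these scores are based on (Ragin 2008) can be sketched as follows; the membership values are illustrative:

```python
def necessity_scores(x, y):
    """Standard fuzzy-set necessity measures (Ragin 2008):
    consistency = sum(min(xi, yi)) / sum(yi)  -- is Y a subset of X?
    coverage    = sum(min(xi, yi)) / sum(xi)  -- how non-trivial is X?
    """
    overlap = sum(min(xi, yi) for xi, yi in zip(x, y))
    return overlap / sum(y), overlap / sum(x)

cause   = [0.8, 0.9, 0.7, 0.6]   # membership in the candidate condition
outcome = [0.7, 0.8, 0.6, 0.3]   # membership in the outcome
cons, cov = necessity_scores(cause, outcome)
print(round(cons, 2), round(cov, 2))   # 1.0 0.8
```

Here the outcome memberships never exceed the condition's, so consistency is a perfect 1.0; the coverage of 0.8 indicates the condition is not trivially large relative to the outcome.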

Necessity Analysis Parameters

Consistency Threshold

The consistency value at or above which a combination of causal conditions will be considered consistent with necessity.

Coverage Threshold

The generated consistency/coverage table will include only those combinations of causal conditions that meet or exceed the specified coverage threshold. Combinations of causal conditions with coverage scores below this threshold are not included in the concov table.

Conducting a Sufficiency Analysis

A sufficiency analysis consists of two steps: converting the calibrated dataset into a truth table and reducing the truth table. To conduct a sufficiency analysis, specify the simplification level for your analysis (i.e., whether to reduce to primitive expressions, prime implicants, fs/QCA's complex solution, or fs/QCA's parsimonious solution) and your frequency, consistency, and consistency proportion thresholds.

Next, click "Truth Table" to generate a truth table and then "Reduce" to reduce the truth table to a sufficiency concov table. (You can also just click "Reduce" to generate the truth table and then automatically attempt to reduce it.) If, for any reason, the truth table cannot be reduced, Kirq will raise an error explaining why it can't be reduced.

Sufficiency Analysis Parameters

Frequency Threshold

The number of observations below which a truth table row will be classified as a remainder.

Simplification Parameter

Four simplification levels are available:

Note that Kirq does not provide an equivalent to fs/QCA's "intermediate" solution. See "Differences between Kirq and fs/QCA," below.

Consistency Threshold

The minimum consistency value required for a truth table row to be classified as consistent with sufficiency.
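The standard fuzzy-set sufficiency consistency measure behind this classification (Ragin 2008) can be sketched as follows; the 0.8 threshold and the membership values are illustrative, not Kirq defaults:

```python
def sufficiency_consistency(x, y):
    """Fuzzy-set sufficiency consistency (Ragin 2008):
    sum(min(xi, yi)) / sum(xi) -- how nearly is X a subset of Y?"""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def row_is_consistent(x, y, threshold=0.8):
    """Classify a truth table row as consistent with sufficiency
    when its consistency meets or exceeds the threshold."""
    return sufficiency_consistency(x, y) >= threshold

row_membership = [0.9, 0.6, 0.8]   # membership in the row's configuration
outcome        = [0.8, 0.6, 0.9]   # membership in the outcome
print(row_is_consistent(row_membership, outcome))   # True
```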

Proportion Threshold

The proportion of consistent:inconsistent observations required for a truth table row to be classified as consistent (or inconsistent) with sufficiency. The proportion threshold is used to implement and identify contradictions, as described in Rubinson's "Contradictions in fsQCA."

Computing and Specifying Truth Table Outcome Values

Kirq automatically calculates the values of the truth table's outcome column, based upon the parameters specified for the sufficiency analysis. You can manually override these values by double-clicking on the outcome cell. An outcome cell can take one of five values:

Differences between Kirq and fs/QCA

Kirq uses the same "truth table algorithm" as fs/QCA, described in Redesigning Social Inquiry (Ragin 2008). However, there are a handful of important differences between Kirq and fs/QCA:

BUGS

The current version of Kirq has a bug that affects how it handles modified data set files; under particular conditions, Kirq will read a previously-cached version of the data set, instead of the newly imported data set. This bug will be fixed in an upcoming release. For now, however, if you run an analysis and then modify its external data file in some way (e.g., by adding a column or recalibrating a variable), you should clear your session before reimporting your data and rerunning your analysis. (Actually, you don't need to clear your entire session. Simply deleting any session items for the dataset/outcome combination that you wish to analyze will be enough.)