Tool: Quick statistics from the command line
5.2 years ago, Kamil wrote:

This is a list of some popular tools for quickly computing statistics on the command line.

For example, here's what you can do with GNU Datamash:

# Example: calculate the sum and mean of values 1 to 10:
$ seq 10 | datamash sum 1 mean 1
55 5.5



    • Quick-and-dirty statistics tool for the UNIX pipeline
    • GNU Datamash is a command-line program that performs basic
      numeric, textual, and statistical operations on input textual data files.

      It is designed to be portable and reliable, and to help researchers
      easily automate analysis pipelines without writing code or even
      short scripts.


    • clistats is a command-line tool that computes statistics on a set of delimited input numbers from a stream, such as a comma-separated-value (.csv) or tab-separated-value (.tsv) file. The default delimiter is a comma. Input can come from a file, a redirected pipe, or be entered manually at the console. To stop processing and display the statistics during manual input, send the EOF signal (CTRL-D on POSIX systems such as Linux or Cygwin, CTRL-Z on Windows).
    • sta is a lightweight, fast tool for calculating basic descriptive statistics from the command line. Inspired by st, it is written in C++, allowing faster computation on larger, non-trivial data sets.

      Additions include the choice of biased vs. unbiased estimators and the option to use a compensated (more numerically stable) variant of the algorithm.

      Given a file of 1,000,000 ascending numbers, a simple test on a 2.5 GHz dual-core MacBook using Bash's time builtin showed sta finishing in under a second, compared to 14 seconds for st.
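The biased/unbiased distinction mentioned above is just a choice of divisor (n versus n−1). As a portable baseline, both can be computed in plain awk; note that this one-pass sum-of-squares form is the numerically naive one, which is exactly why a compensated variant is worth offering:

```shell
# Mean plus biased (population, /n) and unbiased (sample, /n-1)
# variance of the numbers 1..10, read one per line from stdin.
seq 10 | awk '{ s += $1; ss += $1 * $1; n++ }
  END {
    m = s / n
    print "mean", m                           # 5.5
    print "var_biased", ss / n - m * m        # divide by n   -> 8.25
    print "var_unbiased", (ss - n*m*m)/(n-1)  # divide by n-1 -> ~9.17
  }'
```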


    • filo: useful FILe and stream Operations

      • groupBy is a useful tool that mimics the "GROUP BY" clause in database systems. Given a file or stream sorted by the appropriate grouping columns, groupBy computes summary statistics on another column in the file or stream. It works with output from all BEDTools as well as any other tab-delimited file or stream.

      • shuffle will randomize the order of lines in a file. In other words, if you have a sorted file, shuffle will undo the sort.

      • stats is a small utility for computing descriptive statistics on a given column of a tab-delimited file or stream. By default, it assumes you want stats on the first column in your file/stream.
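Since groupBy only requires the input to be pre-sorted on the grouping columns, its core behavior can be sketched with just sort and awk (a per-key sum here; groupBy itself expresses the same thing through its grouping/column/operation flags):

```shell
# Sum column 2 per key in column 1 -- the same effect as a
# groupBy per-group sum. printf fabricates a tiny unsorted input.
printf 'B\t5\nA\t1\nA\t3\n' | sort -k1,1 \
  | awk -F'\t' '
      $1 != key { if (NR > 1) print key, sum; key = $1; sum = 0 }
      { sum += $2 }
      END { print key, sum }'
# A 4
# B 5
```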




Wow, these are great. Thanks for sharing the links.

written 5.2 years ago by poisonAlien

That's great!

written 5.2 years ago by Deepak Tanwar
5.2 years ago, Istvan Albert (University Park, USA) wrote:

I've also used

CSVKit: a suite of utilities for converting to and working with CSV



Powered by Biostar version 2.3.0