
Statistical Analysis


Statistical analysis is the science of collecting, exploring and presenting large amounts of data to discover underlying patterns and trends. Statistics is applied every day – in research, industry and government – to make decisions more scientifically. For example:

  • Manufacturers use statistics to weave quality into beautiful fabrics, to bring lift to the airline industry and to help guitarists make beautiful music.
  • Researchers keep children healthy by using statistics to analyze data from the production of viral vaccines, which ensures consistency and safety.
  • Communication companies use statistics to optimize network resources, improve service and reduce customer churn by gaining greater insight into subscriber requirements.
  • Government agencies around the world rely on statistics for a clear understanding of their countries, their businesses and their people.[1]


Uses of Statistical Analysis[2]

Statistical Analysis may be used to:

  • Summarize the data. For example, make a pie chart.
  • Find key measures of location. For example, the mean tells you what the average (or “middling”) number is in a set of data.
  • Calculate measures of spread: these tell you if your data is tightly clustered or more spread out. The standard deviation is one of the more commonly used measures of spread; it tells you how spread out your data is about the mean.
  • Make future predictions based on past behavior. This is especially useful in retail, manufacturing, banking, sports or for any organization where knowing future trends would be a benefit.
  • Test an experiment’s hypothesis. Collecting data from an experiment only tells a story when you analyze the data. This part of statistical analysis is more formally called “Hypothesis Testing,” where the null hypothesis (the commonly accepted theory) is either rejected or retained; strictly speaking, data can reject a null hypothesis but never prove it. A minimal sketch follows this list.
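As a minimal sketch of the last point, the following Python example runs a one-sample t-test with scipy; the battery-life figures and the 10-hour null hypothesis are invented for illustration:

```python
# A minimal hypothesis-testing sketch (hypothetical data).
# Null hypothesis: the true mean battery life is 10 hours.
from scipy import stats

battery_life_hours = [9.8, 10.4, 9.6, 10.1, 9.5, 9.9, 10.2, 9.7, 9.4, 10.0]

t_stat, p_value = stats.ttest_1samp(battery_life_hours, popmean=10.0)
print(f"t statistic: {t_stat:.3f}, p-value: {p_value:.3f}")

# At the 5% significance level, reject the null hypothesis if p < 0.05;
# otherwise the data are consistent with a mean of 10 hours.
if p_value < 0.05:
    print("Reject the null hypothesis.")
else:
    print("Fail to reject the null hypothesis.")
```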


Types of Statistical Analysis[3]

The two main types of statistical analysis are descriptive and inferential (see next section Descriptive Statistics Vs. Inferential Statistics). However, there are other types that also deal with many aspects of data including data collection, prediction, and planning.

  • Predictive Analytics: If you want to make predictions about future events, predictive analysis is what you need. This analysis is based on current and historical facts. Predictive analytics uses statistical algorithms and machine learning techniques to estimate the likelihood of future results, behaviors, and trends based on both new and historical data. Marketing, financial services, online service providers, and insurance companies are among the main users of predictive analytics. More and more businesses are starting to implement predictive analytics to increase competitive advantage and to minimize the risk associated with an unpredictable future. Predictive analytics can use a variety of techniques, such as data mining, modeling, artificial intelligence and machine learning, to make important predictions about the future (a minimal sketch appears after this list).
  • Prescriptive Analytics: Prescriptive analytics is a study which examines data to answer the question “What should be done?” It is a common area of business analysis dedicated to identifying the best move or action for a specific situation. Prescriptive analytics aims to find the optimal recommendation for a decision-making process. It is all about providing advice. Prescriptive analytics is related to descriptive and predictive analytics: while descriptive analytics describes what has happened and predictive analytics helps to predict what might happen, prescriptive analytics aims to find the best option among available choices. Prescriptive analytics uses techniques such as simulation, graph analysis, business rules, algorithms, complex event processing, recommendation engines, and machine learning.
  • Causal Analysis: When you want to understand and identify the reasons why things are as they are, causal analysis comes to help. This type of analysis answers the question “Why?” The business world is full of events that lead to failure, and causal analysis seeks to identify their reasons. It is better to find causes and to treat them instead of treating symptoms. Causal analysis searches for the root cause – the basic reason why something happens. Causal analysis is a common practice in industries that address major disasters. However, it is becoming more popular in business, especially in the IT field; for example, causal analysis is a common practice in quality assurance in the software industry. The goals of causal analysis are:
    • To identify key problem areas.
    • To investigate and determine the root cause.
    • To understand what happens to a given variable if you change another.
  • Exploratory Data Analysis (EDA): Exploratory data analysis (EDA) is a complement to inferential statistics. It is used mostly by data scientists. EDA is an analysis approach that focuses on identifying general patterns in the data and finding previously unknown relationships. The purposes of exploratory data analysis are to:
    • Check for mistakes or missing data.
    • Discover new connections.
    • Gain maximum insight into the data set.
    • Check assumptions and hypotheses.

EDA alone should not be used for generalizing or predicting. EDA is used for taking a bird’s-eye view of the data and trying to make sense of it. Commonly, it is the first step in data analysis, performed before other formal statistical techniques; a minimal sketch of such a first pass follows.
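The following pandas snippet covers the purposes listed above; the file name and the presence of numeric columns are assumptions for illustration:

```python
# A first-pass EDA sketch with pandas (file and column names are hypothetical).
import pandas as pd

df = pd.read_csv("subscribers.csv")

print(df.head())                    # eyeball a few rows for obvious mistakes
print(df.describe())                # summary statistics for numeric columns
print(df.isna().sum())              # check for missing data
print(df.corr(numeric_only=True))   # look for previously unknown relationships
```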

  • Mechanistic Analysis: Mechanistic analysis is not a common type of statistical analysis. However, it is worth mentioning here because, in some fields such as big data analysis, it plays an important role. Mechanistic analysis is about understanding the exact changes in given variables that lead to changes in other variables. It does not consider external influences; the assumption is that a given system is affected only by the interaction of its own components. It is useful for systems with very clear definitions, such as those studied in the biological sciences.
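To make the predictive type above concrete, here is a minimal scikit-learn regression sketch; the monthly sales figures and the assumption of a linear trend are purely illustrative:

```python
# A minimal predictive-analytics sketch: fit a linear model to historical
# data and predict a future value. The data are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical monthly sales: month index -> units sold
months = np.array([[1], [2], [3], [4], [5], [6]])
sales = np.array([100, 112, 119, 131, 140, 152])

model = LinearRegression().fit(months, sales)
print("Predicted sales for month 7:", model.predict([[7]])[0])
```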


Figure: Types of Statistical Analysis (source: Intellspot)


Descriptive Statistics Vs. Inferential Statistics[4]

  • Descriptive Statistics: Descriptive statistics is the term given to the analysis of data that helps describe, show or summarize data in a meaningful way such that, for example, patterns might emerge from the data. Descriptive statistics do not, however, allow us to make conclusions beyond the data we have analyzed or reach conclusions regarding any hypotheses we might have made. They are simply a way to describe our data. Descriptive statistics are very important because if we simply presented our raw data it would be hard to visualize what the data was showing, especially if there was a lot of it. Descriptive statistics therefore enable us to present the data in a more meaningful way, which allows simpler interpretation of the data. For example, if we had the results of 100 pieces of students' coursework, we may be interested in the overall performance of those students. We would also be interested in the distribution or spread of the marks. Descriptive statistics allow us to do this. How to properly describe data through statistics and graphs is an important topic, discussed in other Laerd Statistics guides. Typically, there are two general types of statistic that are used to describe data:
    • Measures of central tendency: these are ways of describing the central position of a frequency distribution for a group of data. In this case, the frequency distribution is simply the distribution and pattern of marks scored by the 100 students, from the lowest to the highest. We can describe this central position using a number of statistics, including the mode, median, and mean.
    • Measures of spread: these are ways of summarizing a group of data by describing how spread out the scores are. For example, the mean score of our 100 students may be 65 out of 100. However, not all students will have scored 65 marks. Rather, their scores will be spread out. Some will be lower and others higher. Measures of spread help us to summarize how spread out these scores are. To describe this spread, a number of statistics are available to us, including the range, quartiles, absolute deviation, variance and standard deviation.

When we use descriptive statistics it is useful to summarize our group of data using a combination of tabulated description (i.e., tables), graphical description (i.e., graphs and charts) and statistical commentary (i.e., a discussion of the results).
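As a concrete illustration of a tabulated description, here is a minimal Python sketch that builds a grouped frequency table and the headline summary numbers for 100 coursework marks; the marks are simulated and the ten-mark bands are an assumption for illustration:

```python
# A tabulated description of 100 simulated coursework marks:
# a grouped frequency table plus the headline summary numbers.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
marks = np.clip(rng.normal(65, 12, size=100).round(), 0, 99)  # clip to 0-99

# Frequency distribution in ten-mark bands, e.g. [60, 70)
bands = pd.cut(pd.Series(marks), bins=range(0, 101, 10), right=False)
print(bands.value_counts().sort_index())

print(f"mean = {marks.mean():.1f}, standard deviation = {marks.std(ddof=1):.1f}")
```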

  • Inferential Statistics: We have seen that descriptive statistics provide information about our immediate group of data. For example, we could calculate the mean and standard deviation of the exam marks for the 100 students, and this could provide valuable information about this group of 100 students. Any group of data like this, which includes all the data you are interested in, is called a population. A population can be small or large, as long as it includes all the data you are interested in. For example, if you were only interested in the exam marks of 100 students, those 100 students would represent your population. Descriptive statistics are applied to populations, and the properties of populations, like the mean or standard deviation, are called parameters, as they represent the whole population (i.e., everybody you are interested in). Often, however, you do not have access to the whole population you are interested in investigating, but only a limited number of data instead. For example, you might be interested in the exam marks of all students in the UK. It is not feasible to measure all exam marks of all students in the whole of the UK, so you have to measure a smaller sample of students (e.g., 100 students), which is used to represent the larger population of all UK students. Properties of samples, such as the mean or standard deviation, are not called parameters but statistics. Inferential statistics are techniques that allow us to use these samples to make generalizations about the populations from which the samples were drawn. It is, therefore, important that the sample accurately represents the population; the process of achieving this is called sampling. Inferential statistics arise out of the fact that sampling naturally incurs sampling error, and thus a sample is not expected to perfectly represent the population. The methods of inferential statistics are:
    • the estimation of parameter(s) (a minimal sketch follows this list), and
    • the testing of statistical hypotheses.
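As a sketch of the first method, here is one common way to estimate a population mean from a sample in Python, using scipy for a 95% confidence interval; the sample marks are invented:

```python
# Estimating a population mean from a sample (invented data):
# a point estimate plus a 95% confidence interval.
import numpy as np
from scipy import stats

sample_marks = np.array([62, 71, 58, 66, 74, 60, 69, 63, 77, 65])

mean = sample_marks.mean()
sem = stats.sem(sample_marks)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample_marks) - 1,
                                   loc=mean, scale=sem)
print(f"estimated population mean: {mean:.1f}, "
      f"95% CI: ({ci_low:.1f}, {ci_high:.1f})")
```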


Statistical Analysis - Techniques for Summarizing Data

This section covers some of the most common techniques for summarizing your data and explains when you would use each one.

  • Summarizing Data: Grouping and Visualizing: The first thing to do with any data is to summarize it, which means to present it in a way that best tells the story. The starting point is usually to group the raw data into categories and/or to visualize it. For example, if you think you may be interested in differences by age, the first thing to do is probably to group your data into age categories, perhaps ten- or five-year chunks. One of the most common techniques used for summarizing is graphing, particularly bar charts, which show every data point in order, or histograms, which group the data into broader categories.

For example, a chart might use three sets of data grouped by four categories: men, women, and ‘no gender specified’, grouped by age categories 20–29, 30–39, 40–49 and 50–59.

An alternative to a histogram is a line chart, which plots each data point and joins them up with a line; the same data as in a bar chart can be displayed this way.

It is not hard to draw a histogram or a line graph by hand, as you may remember from school, but spreadsheets will draw one quickly and easily once you have input the data into a table, saving you the trouble. They will even walk you through the process.
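If you prefer code to a spreadsheet, the same charts can be scripted. Below is a minimal matplotlib sketch using invented counts for the three groups and four age categories described above:

```python
# Grouped bar chart and line chart of the same invented data:
# three groups across four age categories.
import numpy as np
import matplotlib.pyplot as plt

categories = ["20-29", "30-39", "40-49", "50-59"]
men = [12, 18, 15, 9]
women = [14, 20, 13, 8]
unspecified = [3, 5, 4, 2]

x = np.arange(len(categories))
width = 0.25

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar chart: one bar per group in each age category
ax1.bar(x - width, men, width, label="Men")
ax1.bar(x, women, width, label="Women")
ax1.bar(x + width, unspecified, width, label="No gender specified")
ax1.set_xticks(x)
ax1.set_xticklabels(categories)
ax1.legend()

# Line chart: the same data points joined up with lines
for series, label in [(men, "Men"), (women, "Women"),
                      (unspecified, "No gender specified")]:
    ax2.plot(categories, series, marker="o", label=label)
ax2.legend()

plt.show()
```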


The important thing about drawing a graph is that it gives you an immediate ‘picture’ of the data. This is important because it shows you straight away whether your data are grouped together, spread about, tending towards high or low values, or clustered around a central point. It will also show you whether you have any ‘outliers’, that is, very high or very low data values, which you may want to exclude from the analysis, or at least revisit to check that they are correct.

It is always worth drawing a graph before you start any further analysis, just to have a look at your data.

You can also display grouped data in a pie chart. Pie charts are best used when you are interested in the relative size of each group, and what proportion of the total fits into each category, as they illustrate very clearly which groups are bigger.

Measures of Location: Averages: The average gives you information about the size of the effect of whatever you are testing; in other words, whether it is large or small. There are three measures of average: mean, median and mode. When most people say average, they are talking about the mean. It has the advantage that it uses all the data values obtained and can be used for further statistical analysis. However, it can be skewed by ‘outliers’, values which are atypically large or small.

As a result, researchers sometimes use the median instead. This is the mid-point of all the data. The median is not skewed by extreme values, but it is harder to use for further statistical analysis.

The mode is the most common value in a data set. It cannot be used for further statistical analysis.

The values of the mean, median and mode are generally not the same, which is why it is really important to be clear which ‘average’ you are talking about.
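A quick sketch of why the choice of ‘average’ matters, using Python's built-in statistics module and one deliberately extreme value:

```python
# How an outlier skews the mean but not the median or mode (toy data).
import statistics

values = [4, 5, 5, 6, 7, 5, 6, 95]  # 95 is an atypically large outlier

print("mean:  ", statistics.mean(values))    # 16.625, pulled up by the outlier
print("median:", statistics.median(values))  # 5.5, the unaffected mid-point
print("mode:  ", statistics.mode(values))    # 5, the most common value
```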

Measures of Spread: Range, Variance and Standard Deviation: Researchers often want to look at the spread of the data, that is, how widely the data are spread across the whole possible measurement scale.

There are three measures which are often used for this:

The range is the difference between the largest and smallest values. Researchers often quote the interquartile range, which is the range of the middle half of the data, from 25%, the lower quartile, up to 75%, the upper quartile, of the values (the median is the 50% value). To find the quartiles, use the same procedure as for the median, but take the quarter- and three-quarter-point instead of the mid-point.

The standard deviation measures the average spread around the mean, and therefore gives a sense of the ‘typical’ distance from the mean.

The variance is the square of the standard deviation. They are calculated as follows:

  • calculate the difference of each value from the mean;
  • square each difference (so that values above and below the mean no longer cancel out);
  • sum the squared differences;
  • divide by the number of items minus one. This gives the variance.

To calculate the standard deviation, take the square root of the variance.
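Those steps translate directly into code. The following sketch, with toy data, also computes the range and interquartile range described above:

```python
# Measures of spread computed step by step (toy data).
import math
import statistics

data = [4, 8, 6, 5, 3, 7, 9, 5]
n = len(data)
mean = sum(data) / n

diffs = [x - mean for x in data]     # difference of each value from the mean
squared = [d ** 2 for d in diffs]    # square: above/below the mean no longer cancel
variance = sum(squared) / (n - 1)    # sum the squares, divide by n - 1
std_dev = math.sqrt(variance)        # standard deviation = square root of variance

q1, q2, q3 = statistics.quantiles(data, n=4)  # lower quartile, median, upper quartile
print(f"range = {max(data) - min(data)}, IQR = {q3 - q1}")
print(f"variance = {variance:.2f}, standard deviation = {std_dev:.2f}")
```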

Skew: The skew measures how symmetrical the data set is, or whether it has a tail of unusually high or unusually low values. A sample with a long tail of low values is described as negatively skewed, and a sample with a long tail of high values as positively skewed.

Generally speaking, the more skewed the sample, the less the mean, median and mode will coincide.
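In practice, skew is usually computed rather than eyeballed; scipy's sample skewness is one common convention (toy data again):

```python
# Checking skew: positive values indicate a tail of high values,
# negative values a tail of low values (toy data).
from scipy import stats

tail_of_high_values = [1, 2, 2, 3, 3, 3, 4, 20]
tail_of_low_values = [1, 17, 18, 18, 19, 19, 19, 20]

print("positively skewed:", stats.skew(tail_of_high_values))
print("negatively skewed:", stats.skew(tail_of_low_values))
```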

More Advanced Analysis: Once you have calculated some basic measures of location, such as the mean or median, and of spread, such as the range and variance, and established the level of skew, you can move on to more advanced statistical analysis and start to look for patterns in the data.


Statistical Analysis Software[5]

Since not everyone is a mathematical genius able to easily compute the needed statistics on the mounds of data a company acquires, most organizations use some form of statistical analysis software. The software, offered by a number of providers, delivers the specific analysis an organization needs to improve its business.

The software can quickly and easily generate charts and graphs for descriptive statistics, while also carrying out the more sophisticated computations required for inferential statistics.

Among the more popular statistical analysis software packages are IBM SPSS, SAS, R (including Revolution Analytics' commercial distribution), Minitab and Stata.


See Also

References

  1. Definition: What Does Statistical Analysis Mean? - SAS
  2. What is Statistical Analysis Used For? - Statistics How To
  3. Types of Statistical Analysis - Intellspot
  4. Descriptive Statistics vs. Inferential Statistics - Laerd Statistics
  5. Statistical Analysis Software - Business News Daily


Further Reading