Data Visualization: Techniques, History, and Evaluation

Steps for Visualizing Data

According to Shewan (2016), data visualization is an interdisciplinary field concerned with the visual presentation of data. The method is particularly effective when the amount of information to be conveyed is large. Visual elements can be used to study how the original data (often numerical) is translated into a visual representation (for example, lines or points in a chart): the mapping specifies how the attributes of these visual items vary with the underlying data. In a bar chart, for instance, the length of a bar is mapped to the magnitude of a variable. The graphic style chosen for this mapping can, however, harm a chart's readability (Berinato, 2016).

Statistics has a long history, and data visualization is often considered a subset of descriptive statistics because of its origins in the statistical profession. However, since good visualization requires design talent as well as statistical and computing skills, writers such as Gershon and Page contend that it is both an art and a science. Research into how people read and misread different forms of visuals is increasingly helping to determine which types and aspects of visualizations are the most understandable and effective at communicating information (Viegas et al., 2011).


1. Data gathering: For data to be displayed visually, an organization must first gather the data that will be used in the visualization process. Sales data and marketing surveys are easy ways for most businesses to obtain this information from their customers.

2. Data parsing: The data an organization gathers comes in a variety of forms. To display it, the organization must first identify areas of consistency within the data, which is then transformed into a format that can be readily displayed using the tools already available.

3. Data filtering: The third step is filtering, which is important because it removes any bias that might interfere with the results. Filtering finds and eliminates duplicate data while ensuring that the data being examined meets the criteria that have been established.

4. Data mining: Data mining is the process of converting the acquired raw data into meaningful information; in its most basic form it may also be referred to as data analysis.


5. Data representation: During this stage, those in charge of the visualization choose which symbols and components will be used to depict the various qualities of the data. For example, a picture of a vehicle might be used to symbolize the number of deliveries made per truck.

6. Data refining: Refining guarantees that only relevant, homogenized data is displayed, represented precisely enough that consumers can comprehend it and point out inconsistencies.

Specific Objectives

To evaluate the results and determine the effectiveness of the guidelines and theories.

To measure the extent to which the use of shapes other than rectangles causes misperception of data and/or increases the time taken to interpret data in commonly occurring graph types.

To shed light on how readers interpret data from graphs drawn using shapes other than the normal rectangles.

To investigate how effective adding error bars is at reducing the within-the-bar bias.

To investigate the misperceptions brought about by graphs that present data using complex shapes rather than the normal rectangular shapes.

To determine the effect of different types of visualization and viewer on magnitude estimates and on intra- and inter-rater reliability.

To compare the time taken to design graphs with rectangular caps against the time taken for graphs with complex shapes.

To study the use of regular and irregular shapes in data visualization. 
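Purely as an illustration of the six-step workflow described earlier (gathering, parsing, filtering, mining, representation, refining), the sketch below walks a handful of toy sales records through each stage. The record format, field values, and the text-bar rendering are all invented for the example.

```python
# Hypothetical walk-through of the six visualization steps.

# 1. Gather: raw records, e.g. from a sales survey (formats vary).
raw = [("2023-01", "120"), ("2023-02", "95"), ("2023-02", "95"),  # duplicate row
       ("2023-03", "140"), ("2023-04", "n/a")]

# 2. Parse: normalize every record into (month, int) form.
parsed = []
for month, value in raw:
    try:
        parsed.append((month, int(value)))
    except ValueError:
        pass  # "n/a" and similar rows fail the consistency check

# 3. Filter: drop duplicates so no month is counted twice.
filtered = sorted(set(parsed))

# 4. Mine: turn the cleaned values into meaningful summary information.
total = sum(v for _, v in filtered)
peak_month, peak_value = max(filtered, key=lambda r: r[1])

# 5. Represent: map each value to a visual property (here, bar length).
scale = 40 / peak_value                     # longest bar is 40 characters
bars = {m: "#" * round(v * scale) for m, v in filtered}

# 6. Refine: present only the homogenized data, clearly labeled.
for month, bar in bars.items():
    print(f"{month} | {bar}")
print(f"total={total}, peak={peak_month}")
```

A real pipeline would read from files or databases and render with a charting library, but the stage boundaries stay the same.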

The subsequent chapters of this paper contain evaluations of the methods and theories of data visualization research, the procedures used, and the commercial applications of visualization involved in testing, calculating, and analyzing visualizations.

There is no complete previous research on visualization. The history of data visualization is fragmented, with little attention paid to how different disciplines have contributed; Michael Friendly and Daniel J. Denis have been assembling the first comprehensive history of the field. Despite limited documentation, data visualization has been around for a very long time. Since the Pleistocene era, caves (such as the Lascaux Cave in southern France) have been used to record information about the stars and their positions, and quantitative information may also be represented graphically in the form of physical artifacts (Tufte, 2001).

The origin of data visualization can be traced to the Turin Papyrus Map, which dates back to 1160 B.C. Under the term "thematic cartography," such maps are examples of data visualization that use a geographical depiction to convey a theme linked to an area of the world. Many cultures' thematic maps, hieroglyphs, and ideograms, which illustrated information and allowed for interpretation, were among the earliest known forms of data visualization. Linear B tablets at Mycenae, for example, depicted Mediterranean commerce throughout the Late Bronze Age, and by at least 200 B.C. Egyptian surveyors were using something akin to latitude and longitude to locate both earthly and celestial positions. Later developments in visualization can be traced on paper and parchment. An appendix to a monastic school textbook from the 10th or 11th century included a diagram illustrating the movement of the planets (Berinato, 2016). In the graph, the inclination of the planets' orbits appears to be plotted as a function of time. The time, or longitudinal, axis was a horizontal line split into thirty equal parts to represent the zodiac zone on a plane, while the vertical axis showed the width of each zodiac sign. Because of discrepancies in the times, the horizontal scale seems to have been customized for each planet separately; only the amplitudes were recorded, and there appears to be no temporal relationship between the curves (Wexler et al., 2017).


Tycho Brahe’s observatory included a “wall quadrant” covering a full wall by the time he built his instrumentation in the 16th century for observing and measuring physical quantities as well as geographic and celestial locations. The invention of triangulation and other methods of mapping places precisely was particularly significant. Scholars began developing new methods of displaying data as early as 1596, when Lorenz Codomann and Johannes Temporarius both published ground-breaking works on the topic. The two-dimensional coordinate systems invented by René Descartes and Pierre de Fermat had an enormous impact on the practical techniques of presenting and calculating data, and the work of Fermat and Pascal laid the foundations of statistics and probability theory. These innovations permitted and assisted William Playfair, who recognized their potential for the visual representation of quantitative information, to conceive and develop graphical techniques for statistical analysis (Andrews, 2013).

Jacques Bertin employed quantitative charts to portray data simply, precisely, and correctly in the second half of the twentieth century. John Tukey’s innovative statistical approach to exploratory data analysis opened the door for people other than statisticians to refine their skills in data representation. With the advancement of technology, information visualization has evolved from hand-drawn images to more sophisticated forms, such as interactive designs and software representations. In the discipline of statistics, systems such as SAS and Minitab provide information visualization, and quantitative data may be visualized using programming languages such as Python and JavaScript, libraries such as D3, and other more specialized applications. To fulfill the demand for studying data representation and the accompanying programming libraries, private institutions such as The Data Incubator and General Assembly have launched free or paid programs (Visually, 2019). Since 2013, Art Center College of Design and JPL have sponsored a yearly program on effective information visualization, beginning with the symposium “Data to Discovery,” which explores the extent to which interactive data representation can aid scientists in exploring their data (IBM, 2019).

Data visualization theory is needed to help users choose appropriate tools for analysis and exploration; tool designers could benefit from the same theory. The following argument aims to shed light on the potential nature of such a theory. Most researchers in information visualization agree that the main purpose of data representation is to give people a better understanding of their data and the underlying processes. Developing a conceptual model is one way to gain that comprehension. Because a model is a pared-down representation of the data rather than an exhaustive list of every single data item, building one requires abstraction. It is possible, for example, to develop an understanding of temperature change by keeping track of morning temperatures over a period of time.


This type of abstraction is based on a comprehensive understanding of the characteristics of multiple data items. The term “pattern” will be used to describe these features. Patterns can take many forms, two of which are increase and decrease. The patterns in the data can be combined to form a single model: if long-term measurements of morning temperatures are made, the model is likely to incorporate both the rise and the fall of the temperature. Patterns can also be broken down into sub-patterns. For example, the rise and fall of temperature can be viewed as a cyclical “wave” that repeats itself; the “wave” pattern, a composite of the rising and falling patterns, is a higher-level pattern that incorporates them (Newell et al., 2016).

The primary function of Information Visualization tools is to help the user identify trends that can be used to build an effective model; in practice, this implies that a tool should make it easier for users to see the data as a whole. There must be a clear understanding of what kinds of patterns are needed for a tool to be used effectively in pattern detection. It will then be simple to describe the goal of the tool and to train users to identify the kinds of patterns the tool focuses on (McFadden, 2018). The types of patterns sought must suit the data under investigation. An analysis of temporal numeric data should look for basic patterns such as growth, decrease, stability, fluctuation, peaks, and low points. When the data refers to a discrete, unordered set, such as melting temperatures for various substances, the relevant patterns are groups of elements with similar measurement values and frequency-related patterns; alternatively, outliers (extremely high or low values) can be found.
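As a toy illustration of the basic temporal patterns named above, the sketch below classifies the step-to-step behaviour of a numeric series as growth, decrease, or stability, and locates peaks. The tolerance parameter and the sample temperature series are invented for the example.

```python
# Classify consecutive steps of a series and locate its local maxima.

def step_patterns(series, tolerance=0.0):
    """Label each consecutive pair as 'growth', 'decrease' or 'stable'."""
    labels = []
    for a, b in zip(series, series[1:]):
        if b - a > tolerance:
            labels.append("growth")
        elif a - b > tolerance:
            labels.append("decrease")
        else:
            labels.append("stable")
    return labels

def peaks(series):
    """Indices of local maxima: points higher than both neighbours."""
    return [i for i in range(1, len(series) - 1)
            if series[i] > series[i - 1] and series[i] > series[i + 1]]

# Morning temperatures over a week: a rise, a peak, then a fall,
# i.e. a simple instance of the "wave" pattern discussed earlier.
temps = [11, 13, 16, 15, 12, 12, 14]
print(step_patterns(temps))
print(peaks(temps))
```

A visualization tool built for temporal data would surface exactly these pattern types visually rather than by enumeration.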

To enable the creation and use of Information Visualization tools in the way described above, a theory is required that can predict what types of patterns will be found in a particular dataset or class of datasets. We stress the word “types” to avoid the idea that we aim to forecast (and thus automatically discover) all of the precise patterns buried in specific data. A forecast that a dataset may include groups of items with the same features does not identify which precise clusters are present; users and tool designers nevertheless learn that they require a tool that aids in the discovery of clusters (Stowers, 2013). If each Information Visualization tool and approach has such a signature, users will be able to pick the proper one. We call this theory a “data-centered predictive theory.” The theory needs to incorporate the following ideas:

  1. A suitable generic framework for describing various types of data and their structures;
  2. A classification system for trends and patterns in general;
  3. A way to infer potential pattern types from the characterization of the data.


Data is a collection of records with a similar structure, composed of components (e.g., numbers or strings) that either represent the outcomes of certain observations or define the context in which they were collected: the place, time, and object of the measurement. “Values” refers to the individual data items that make up a data record. Since all records in a dataset are structured similarly, each position has a specific meaning shared by all the values in it. These positions can be given names to distinguish them, and they are commonly referred to as data components (Department for Communities and Local Government, 2009).

Temperature and wind movement are two examples of climate attributes that can be measured by reference to geographic location and time of day. Each place and time serves as a referrer, and the combination of atmospheric temperature and air movement corresponding to that location and time is an attribute of it. Having two referrers makes this a two-dimensional dataset; attributes are not counted as dimensions. Referrers are independent components, since their values define the context, while attributes depend on them. All of a dataset’s referrers must be dealt with at the same time, regardless of how many attributes are selected for analysis (The Art of Consequences, 2018). In the mathematical sense, a dataset can be thought of as a function with the referrers as independent variables and the attributes as dependent variables: there can be only one combination of attribute values for each combination of referrer values. Determining which elements of the dataset are referrers and which are attributes helps define its structure. In addition, the properties of the components must be specified. Whether or not there are distances between the elements is one of the most important characteristics. Discrete sets, such as a collection of integer values representing the numbers of various items, can have distances, as can any continuous set, such as time, space, or temperature values; a discrete set of substances, by contrast, has no distances.

Another question is whether and how the elements are ordered. Time events, for example, are organized linearly or cyclically, depending on the time span of the observations. When creating a set of value combinations, keep in mind that it does not inherit the properties of its constituent parts: although melting temperature and atomic weight are each fully ordered, a set of their possible combined values is only partially ordered (Rensink & Baldridge, 2010).
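The referrer/attribute distinction can be made concrete with a small sketch: a two-referrer climate dataset stored as a mapping from (location, time) to (temperature, wind speed), i.e. literally as a function from referrer values to attribute values. All station names and readings here are invented.

```python
# A two-dimensional dataset: two referrers (location, time) jointly
# determine exactly one combination of attribute values (temperature, wind).
climate = {
    ("station_a", "06:00"): (4.5, 11.0),
    ("station_a", "12:00"): (9.1, 14.2),
    ("station_b", "06:00"): (3.2, 20.5),
    ("station_b", "12:00"): (7.8, 18.9),
}

def lookup(location, time):
    """The dataset viewed as a function: referrers in, attributes out."""
    return climate[(location, time)]

# Even when analysing a single attribute (temperature), every referrer
# must still be addressed; here we fix time and vary location.
temperatures_at_noon = {loc: climate[(loc, "12:00")][0]
                        for loc, t in climate if t == "12:00"}
print(lookup("station_a", "06:00"))
print(temperatures_at_noon)
```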

The Gestalt school of psychology, founded in Germany in 1912, recognized that human beings perceive ideas and objects in particular ways before consciously processing them; this led the school to develop the six principles explained below (Few, 2013).

1. Similarity: According to this Gestalt concept, we instinctively group together items that are similar in characteristics such as size, tone, or orientation.

2. Proximity: The concept of proximity suggests that items that are close to one another are more likely to be perceived as a group.

3. Continuity: According to this principle, we perceive elements arranged on a curve or a line as related to each other, while elements that are not on the line or curve are seen as separate.

4. Closure: The components of a sealed object will be regarded as a group. Humans will even fill in gaps where content is lacking in order to provide closure and give meaning to an object.

5. Common region: This concept suggests that humans are more likely to group things together if they are all situated inside the same bounded area.

6. Prägnanz: According to this fundamental concept, humans naturally perceive events in their simplest form or order.

Few (2013) claims that by using the principles explained above to understand visualizations, we can gain a better grasp of the challenges of making important information stand out, organizing visualizations in a meaningful and efficient manner, and understanding why and how visualizations work. These ideas will therefore be incorporated into the experiment to see whether they hold in practice (Spence, 2005).

Research methods are distinct from research methodologies: they are the ways you will get the data for your research project, not the ways you will conduct the research itself. The best method for your project depends on what you are studying, what kind of data you need, and who or what you will get it from. Quantitative, qualitative, and mixed research methods are listed below.

Closed-ended questionnaires or surveys: Respondents must choose from a list of answers that have already been written down, picking the one they agree with most. This is the simplest way to do quantitative research, because the data is easy to aggregate and quantify.

Structured interviews: These are a common way to do market research because the data can be quantified. They are very strict, leaving little “wiggle room” in the interview process, so the data is not skewed. Structured interviews can be done in person, on the internet, or over the phone (Dawson, 2019).

To make sure the questions in your survey or questionnaire are both accurate and clear, there are things you can do when writing them (Dawson, 2019):

  • Keep the questions short and simple. 
  • Remove any possible bias from your questions; do not let them favor one point of view over another. 
  • If the subject of a question is very personal, consider asking indirect rather than direct questions, so that people do not feel intimidated and hide what they really think. 
  • For closed-ended questions, try to think of every answer a person could give. 
  • Do not ask questions that make assumptions about the respondent. Before asking how often a person exercises, first ask whether they exercise at all. 
  • Keep the questionnaire as short as possible. The longer it is, the more likely respondents will not finish it or will be too tired to give honest answers. 
  • At the start of the questionnaire, tell your participants that they will not be judged. 

Choosing a quantitative approach to your research means that, before you start, you need to decide what kinds of measurements you will use in your study; this determines what kind of numbers you will collect as data. There are four levels of measurement:

Nominal: These are numbers where the order makes no difference; they serve only to separate different types of information. One example is collecting zip codes from research participants: the order of the numbers does not matter, but each zip code is a different set of numbers meaning different things (Adamson and Prion, 2013).

Ordinal: Ordinal numbers are also called rankings because the order in which they appear is important: items are given a certain rank based on a set of rules. Ordinal measurements are often used in ranking-based questionnaires, where people are asked to rank items from least favourite to most favourite. A pain scale, where a patient is asked to rate how bad their pain is on a scale from 1 to 10, is another example (Adamson and Prion, 2013).

Interval: An interval scale is one in which the distances between numbers are meaningful to the researcher (Adamson and Prion, 2013). Each number is the same distance from the next, so the intervals between them are equal. Test grades are an example of interval data.

Ratio: Ratio data is ordered and has a true “zero point,” meaning the quantity being measured in your study can be entirely absent (Adamson and Prion, 2013). Height is an example of ratio data, because the zero point stays the same in all measurements and the height of a thing can actually be zero.
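The four levels of measurement can be summarized in code, since the point of the classification is which operations are meaningful at each level. The table below is a sketch with invented example entries.

```python
# Each measurement level supports strictly more operations than the last:
# nominal: equality only; ordinal: adds ordering; interval: adds meaningful
# differences; ratio: adds a true zero, so ratios become meaningful.
LEVELS = {
    "nominal":  {"example": "zip codes",   "ordered": False, "differences": False, "true_zero": False},
    "ordinal":  {"example": "pain 1-10",   "ordered": True,  "differences": False, "true_zero": False},
    "interval": {"example": "test grades", "ordered": True,  "differences": True,  "true_zero": False},
    "ratio":    {"example": "height",      "ordered": True,  "differences": True,  "true_zero": True},
}

def can_average(level):
    """Taking a mean assumes meaningful differences: interval or ratio data."""
    return LEVELS[level]["differences"]

print(can_average("ordinal"), can_average("ratio"))
```

This is why averaging pain-scale scores is statistically questionable, while averaging heights is not.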

Focus groups: This is when a small group of individuals meets to talk about a certain subject; focus groups can also be referred to as group interviews or group discussions (Dawson, 2019). Most of the time they have a moderator who helps guide the conversation and asks certain kinds of questions. It is very important for the moderator to make sure that everyone in the group speaks, so that no one dominates the conversation. A great range of thoughts, opinions, and perspectives can be found in focus groups.

Advantages of Focus Groups 

  • A single meeting is all that is needed to get a wide range of responses from people. 
  • There is less researcher bias, because the people in the study can talk freely. 
  • They help people who are afraid of or insecure about a subject get over their feelings. 
  • The researcher can also observe how people interact with each other. 

Disadvantages of Focus Groups 

  • Participants may be afraid to speak before a group, especially if the issue is touchy or controversial. 
  • Because it is up to each person to join the discussion, not everyone will contribute equally. 
  • It is possible for participants to influence what others say or think. 
  • A researcher might be apprehensive about running a focus group on their own if they do not know how to do it well. 
  • Extra funds or resources may be needed for a suitable place to hold the meeting. 
  • Because the data is collective, it may be hard to figure out what each person thinks about the research topic. 

It is possible to do research observations in two ways: 

Direct observation: The researcher observes a person in an environment, using notes or technology such as a voice recorder or video camera to collect data. The researcher does not get in the way of, or interfere with, the people taking part. This method is often used in psychology and health research (Dawson, 2019).

Participant observation: The researcher gets to know the people taking part in the research by talking to them; this is a common way to learn about a new culture or community. Because covert observation can be unethical in some situations, it is important to decide whether your observation will be overt or covert (Dawson, 2019).

Open-ended questionnaires: Because the answer boxes are left open, these questionnaires are the opposite of multiple-choice ones: people can write short or long answers to the questions. After gathering the responses, researchers often “quantify” the data by grouping the responses into categories. This can take a long time, because the researcher has to read each answer very carefully.

Semi-structured interviews: This is the most frequent sort of interview when researchers are looking for particular information for comparison purposes. The same questions are asked in each interview, but the interviewees are allowed to be more flexible with their answers, and follow-up questions are included when a person responds in a specific manner. Interview schedules, which contain the themes and questions to be covered at each interview, are often used to assist interviewers (Dawson, 2019).

Theoretical analysis: Researchers often employ theoretical analysis, using a theoretical framework to analyse data about a topic, in their work with animals or other nonhuman subjects. The theoretical framework gives the researcher a specific “lens” through which to examine the topic, and it serves as a springboard for further investigation. This is a common research method in the study of literature, films, and other forms of media. More than one theoretical framework can be applied with this method, since many theories work together. The best-known theoretical frameworks for qualitative research are (Grant and Osanloo, 2014):

  • Behavioural theory 
  • Gender theory 
  • Change theory 
  • Marxist theory 
  • Cognitive theory 
  • Developmental theory 
  • Cross-sectional analysis 

In-depth interviews: An interviewee’s point of view on a topic or situation is explored in detail through these interviews, also known in certain circles as “life history interviews.” To ensure that the interviewee is able to express their opinions freely, it is crucial not to ask too many questions (Dawson, 2019).

Open-ended and closed-ended questionnaires: This method incorporates components of both questionnaire kinds into data gathering. Respondents may answer certain questions with prefabricated responses and write their own replies to other questions. The benefit of this technique is that you gain from both methods of data collection and acquire a better picture of your participants; however, you must think carefully about how you will analyse this data to draw a conclusion.

Other mixed-method techniques that combine quantitative and qualitative research methodologies depend greatly on the study question. It is strongly advised that you consult your academic adviser before settling on a mixed-method strategy.

What criteria should you use to choose the most appropriate research approach for your project? It all depends on what you are trying to accomplish with your study. According to Dawson (2019), you should ask yourself these questions when deciding on the ideal research approach for your topic:

  • Do you have a good grasp of maths and numbers? 
  • What’s your financial plan? Do you have enough money to do this research in the way you want? 
  • Would you be interested in interviewing people? 
  • Do you like the idea of creating a survey that people can fill out? 
  • Is it easier for you to communicate verbally or in writing? 
  • Do you have any special abilities or knowledge that could aid you in your research? Do you have any prior research project experience that you could draw on to assist with this one? 
  • Do you have enough time to finish the research? Data collection can take a long time depending on the method being used. 

To be recognized as excellent and complete, a graph must include accurately labeled axes with a data key, as well as an appropriate title that briefly describes what the graph shows (Waskom et al., 2021). A graph that lacks at least one of these characteristics should be deemed a poor graph, since it may be deceptive and may not provide the proper information. For a graph to be regarded as proper and acceptable, its creator must also make certain that the ratio of the ink used to the information in the graph is kept in proportion.

A poor chart, by contrast, has no labels and no title; its information may be color-coded but with no key, so the chart is not immediately comprehensible at first glance. It may contain an excessive number of data categories, and the months identified on its axis may be skipped at regular intervals.

  1. The use of visuals rather than numerical representations: When a developer chooses to use images instead of numbers, the goal of the chart may be misunderstood, and the reader may misinterpret the data on the graph.

If readers do not understand how the data depicted on such a chart is calculated, they may be misinformed. Pictures tend to occupy different proportions of the graph, which can make it very difficult to understand the data and collect what is needed for analysis. The designer should instead use dots or numbers, as this makes the graph much easier and clearer to interpret. The majority of graphs that make use of images are intended to advertise rather than to convey important information.

In such a chart, it may be quite hard for readers to obtain the correct numbers they need. Irregular coloring of a graph may be pleasant to the eye, but at the same time the precision of the desired information is lost; uneven color coding makes for a bad graph.

Designers may also leave gaps in their graphics to draw customers’ attention by making the charts visually appealing, but leaving gaps in the plots omits highly significant information, making the figures erroneous and misleading.

When readers try to extract data from the gapped area of such a graph, the gaps may cause a great deal of confusion (Tufte, 2020). The designer forces the reader to make assumptions that have a high likelihood of being erroneous (Chen et al., 2021). Designers should do all in their power to ensure that their graphs cover every point they aim to cover, using complete lines on the graphs to accomplish this.

Graph designers also use complex forms instead of the intended rectangular shapes (Du et al., 2021). When designers use complicated forms on bar graphs, such as triangular or circular shapes, the accuracy of the graph is compromised, and viewers may not be able to get the information they need.

When graphs are designed with this sort of complicated feature, the proper information contained in the chart is lost and the data’s precision is compromised. Although such graphs are visually appealing, the information they are intended to convey becomes incorrect and imprecise.

Designers also construct erroneous graphs by using three-dimensional rather than two-dimensional models. Graphs that make use of three-dimensional space are very complicated and difficult to extract information from.

Obtaining information from such a graph is quite difficult: 3D alters the visualization and changes the intended information of the graph (Sayed et al., 2021). Although designers intend to attract their consumers, they may also mislead them, disappointing consumers who have not acquired the information they needed.

Good graphs, by contrast, have a clearly defined and visible range, as well as distinct colors, so that the reader can contrast and differentiate the different areas and regions of the graph. Readers of a good graph do not have to strain their eyes, because the values are presented at a legible size. For a reader to understand the information presented, the quantities and their respective units or values must be labeled clearly. Good plots have a good design, presenting the values in a manner consistent with their units.

A good graph should always use less ink, and the ink used should correspond to the amount of data represented on the chart. Each bit of data displayed on the graph should be clear and easily read, so the designer should follow the standard data-ink ratio.

It is very simple to decipher the information in the graph above. The reader will not have to exert any effort in order to obtain the information he or she requires. The amount of ink used in the graph is kept to a bare minimum and corresponds to the data contained within the chart (Wang et al., 2021). The graph provides accurate information in the manner in which it was intended.

Good graphs are two-dimensional rather than three-dimensional. The ink within the 2D graphs has been effectively utilized. The information contained within the graph is clear, and anyone can obtain information quickly and without difficulty. All of the information in the chart is easily discernible. 

The creator of this chart has used 2D to draw the graph. The plot above is clean, accurate, and straightforward to read data from.

A decent graph should employ rectangular forms rather than other complicated designs that may be deceptive. To make a chart excellent and correct, the designer should avoid complicated shapes at the ends of the bars (Danisch et al., 2021). Rectangular forms make it easier for the reader to extract the correct information from the plot.

How best to portray data from research in bar graphs has long been a matter of controversy. A large number of researchers have used bar graphs to display their data for a very long time (Barnett et al., 2021). Recently, however, some researchers have altered the form of bar graphs when constructing them, replacing the standard rectangular bars with more complicated shapes. These designs have affected the veracity of the information presented and increased the likelihood of errors in the data shown in the graphs.

When developing any form of infographic, whether a chart or another kind, the most important things the designer must consider are the size, the form, and how clear the graph or infographic looks to the customer when seen from a distance. When creating a map or a chart for customers, the designer must take into consideration the fundamental features of data visualization.

When employing visualization, a variety of factors must be considered. While most people associate visualization with the ratios of size and form, additional considerations should be taken into account, such as scale in relation to size and shape. The importance of the ratio scale lies in the fact that it allows the chart and all of its detail to be shown on a single page. Most of the time, people who are presented with graphical information have little knowledge of data processing and analysis, and even those who do have limited time to devote to studying the information. When designing a data visualization, it is therefore important to keep the visualization simple so that anyone can understand the data being displayed. Appropriate colors should support the visualization in order to maintain the viewers’ attention and suitably differentiate data details.

When graphs are used for data visualization, it is critical to avoid potentially misleading symbols, such as superfluous arrows and graph text. I believe simplicity is one of the most important aspects, since it caters to all types of viewers regardless of their ability to comprehend the material. Your presentation skills must also be sufficient to suit the demands of your audience. Data representation abilities, like any other talent, should be exercised in order to discover areas that demand development over time (Sorapure, 2019). Effective rehearsal of the created data visualization during planning will help ensure that the data is well comprehended. Finally, you must communicate well and explain the specifics of the data you are providing.

In my opinion, expertise is a critical element because it helps the composer of graphics to personally grasp the data, remember the concepts, and communicate the secret narrative that lies concealed inside the data in the most appropriate manner.

According to the author of this piece, a competition among designers in the area of infographics has been sparked (Clarinval et al., 2021). The variety of designs used by different designers has prompted contests to determine which design is best and which designs should be combined to achieve the desired goals. As a result of the competition, participants have gained a new understanding of how data should be displayed on graphs and what kinds of designs should be employed to show the information. Designers and others who use charts to express their ideas have recently flooded the Internet with hundreds of infographics of various designs as a result of this race in the world of infographics. Despite the heavy competition, good graph designs and designs with novel features tend to exert more competitive pressure in the race for eyeballs than bad ones.

Graphs become increasingly visually appealing as designers in the realm of infographics continue to create more and more charts with a variety of distinct patterns and styles. It has long been the objective of data visualization designers to enhance the data representation of these charts by adjusting and embellishing the graphs’ structure, appearance, and the manner in which they are presented. The designers have modified the designs of the bar graphs from utilizing rectangular bar graphs to employing various complicated shapes in the graphs in order to win the race in the realm of information graphics.

In modifying the designs of bar charts, designers often fail to improve the quality of the data presented, so that many bar graphs are erroneous and incorrect despite being visually appealing and eye-catching. The quality of the results presented in a bar graph should be its most significant component, but designers have minimized this aspect in favor of aesthetics. As an alternative to the more traditional rectangular bar chart, designers have chosen more complicated forms, such as circular and triangular shapes, at the very top of the bars. When the reader looks at these forms, his or her attention is redirected and the reader takes away an idea different from what was intended.

Based on the literature study and other criteria considered throughout the research procedure, the researcher anticipated the following findings from the data:

  1. The scholarly hypotheses, with the possible exception of the radar chart, would be mostly confirmed.
  2. The most widely used visualisations would yield the best results.
  3. The more intricate visualizations would be the least popular.
  4. Males would outperform females in spatial ability, younger participants would beat older participants, and individuals with STEM degrees and/or employment would outperform non-STEM participants.
  5. Those who have received a higher education would outperform those who have received a lesser education.
  6. The findings would be negatively impacted if the devices and displays were too small.

The first thing that needed to be considered was which visualisations would be utilized in this particular experiment. In light of the conclusion drawn from the literature review, namely that the visualizations had to conform to the simple, data-driven school of thought, starting with the data appeared to be the most natural approach.

Using two experiments, this study will attempt to map the fundamental characteristics of generalizability as well as the causes of within-bar bias. The research will also examine the possibility of a link between graph literacy and within-bar bias in bar graphs.

All of the participants in this experiment were undergraduate students from a local institution. The sample size was 458, with participants ranging in age from 10 to 60 years (median age of 19 and skewness of 4.38).
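The median and skewness reported above can be computed with a short helper. This is only a sketch: it uses the common moment-based sample skewness, since the paper does not state which estimator was used, and the age data here is illustrative rather than the study's.

```python
import statistics

def sample_skewness(values):
    """Moment-based skewness: m3 / m2^(3/2) over the sample."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((x - mean) ** 2 for x in values) / n
    m3 = sum((x - mean) ** 3 for x in values) / n
    return m3 / m2 ** 1.5

ages = [18, 18, 19, 19, 19, 20, 21, 45]  # illustrative, not the study's data
print(statistics.median(ages))           # the middle of the distribution
print(sample_skewness(ages))             # positive: a long right tail, as here
```

A large positive skewness with a low median, as reported for this sample, indicates a few much older participants pulling the tail of an otherwise young distribution.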

The research was conducted using a questionnaire administered in a university laboratory, with all materials submitted electronically. The participants were presented with a series of hypothetical health-related scenarios and asked to decide how they would respond. In this hypothetical case, the participants’ blood glucose level was assumed to have been 120 mg/dL the previous week. However, tests performed in the current week revealed that the glucose level fluctuated between -20 and +20 percent of the baseline. Participants were told that a significant deviation of glucose levels from the optimal range could have serious health repercussions; the average change over the week was zero for all participants.
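The ±20 percent band around the 120 mg/dL baseline described above works out to a simple arithmetic range; this trivial sketch makes the scenario's numbers explicit (the function name is illustrative, not from the study):

```python
def fluctuation_range(baseline, fraction):
    """Return the (low, high) band implied by a +/- fractional change."""
    return (baseline * (1 - fraction), baseline * (1 + fraction))

low, high = fluctuation_range(120, 0.20)
print(low, high)  # the 96-144 mg/dL band the scenario implies
```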

Using a link disseminated on social media platforms, the researchers recruited the participants for this experiment entirely online. The experiment drew a total of 672 participants, but only 612 completed it successfully. One participant was under the age of consent and was removed from the research, leaving a final sample of 611 participants aged 18 to 77. The experiment took an average of 15 minutes to complete. The mean age of the participants was 33 years, with a skewness of 0.89. The demographics of the participants were as follows:

| Category                    | Proportion |
|-----------------------------|------------|
| High school                 | 8%         |
| College or Associate degree | 37%        |
| Bachelor’s degree           | 41%        |
| Master’s degree or higher   | 14%        |

The experiment used an online questionnaire, and the respondents were told in advance about the type of data being utilized and its provider before taking part. Respondents were also supplied with other information relevant to the research.

  • A Microsoft Excel spreadsheet with the results was created and saved securely in the researcher’s Google Drive account. 
  • Because of the low number of participants and the emphasis on processes rather than results, simple data analysis approaches such as counting correct responses and computing the mean were possible. 

Research participants were randomly assigned to the experimental conditions: 90 participants took part in the numerical control condition, while the remaining participants were presented with both numerical and graphical information. In the rising condition (n = 91) the bar rose from a lower x-axis, while in the falling condition (n = 89) it descended from an upper x-axis. Graphs depicting error bars in the rising and falling conditions (n = 93 and n = 95) were similar to those in the first two conditions, with the y-axis scale ranging from -20 to 20.

Participants were expected to choose a treatment based on the information provided. Their preferred treatment could increase or decrease their blood glucose level. Each participant provided an answer using a slider with the three options reduce, increase, and remain constant; the slider’s values ranged from -50 to 50. Participants were allowed unlimited time to read and answer the questions.
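The slider response described above can be encoded as a simple mapping from the -50..50 value to the three options; the function name and the exact thresholds are illustrative assumptions, not the study's actual instrument.

```python
def slider_decision(value):
    """Map a -50..50 slider value to the three response options."""
    if not -50 <= value <= 50:
        raise ValueError("slider value out of range")
    if value < 0:
        return "reduce"
    if value > 0:
        return "increase"
    return "remain constant"

print(slider_decision(-30))  # a negative value means reducing glucose
```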

The results of the experiment, as indicated in Figure 1 below, show that participants presented with rising bars preferred to increase their blood glucose levels regardless of the numerical information presented. The opposite was observed in participants presented with falling bars.

This experiment also sought to determine the extent to which error bars and graph literacy reduced within-bar bias. This was accomplished by reversing the signs of preferences in the falling-bar conditions to allow comparability with the rising-bar conditions. A negative sign indicated the unexpected direction and vice versa. A linear regression model with the following features was developed to predict the bias scores:

  • Skewness = 0.33
  • Presence of error bars = +1
  • Absence of error bars = -1

The regression model proved not to be a reliable predictor of bias, meaning that the bias scores were not associated with any predictor.
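The sign reversal for falling bars and the +1/-1 effect coding of error bars described above can be sketched as follows; the function and variable names are illustrative assumptions, not the study's analysis code.

```python
def comparable_bias(score, condition):
    """Flip the sign of falling-bar preferences so that positive
    values always indicate bias in the bar's direction."""
    return -score if condition == "falling" else score

def design_row(graph_literacy, has_error_bars):
    """One row of the regression design matrix: literacy score,
    effect-coded error bars (+1 present, -1 absent), interaction."""
    eb = 1 if has_error_bars else -1
    return [graph_literacy, eb, graph_literacy * eb]

print(comparable_bias(-12, "falling"))  # flipped to positive
print(design_row(3, False))             # literacy, coding, interaction
```

Effect coding (rather than 0/1 dummy coding) centers the error-bar predictor, so the literacy coefficient is interpretable as an average effect across both error-bar conditions.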

| Predictor        | β     | t    | p    |
|------------------|-------|------|------|
| Literacy         | -0.01 | 0.10 | 0.92 |
| Error bars       | -0.05 | 0.93 | 0.36 |
| Interaction term | -0.05 | 0.93 | 0.35 |

Given the parameters of this experiment, the results indicate that the magnitude of within-bar bias is not a robust function of graph literacy, and there was insufficient evidence to conclude that including error bars reduced the bias. The results also indicate that the bias was larger in the falling-bars condition (M = 7.53, SD = 20.70) than in the rising-bars condition (M = 3.11, SD = 17.01).
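The falling- versus rising-bars comparison above can be summarized as a standardized effect size. This is a sketch of the standard pooled-SD Cohen's d applied to the reported means and SDs; the paper itself does not report this value, and equal-n pooling is an assumption.

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d with a simple pooled standard deviation
    (equal-n pooling, an assumption here)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

d = cohens_d(7.53, 20.70, 3.11, 17.01)
print(round(d, 2))  # about 0.23, conventionally a small effect
```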

The pie chart above indicates that young people can withstand high sugar levels for longer than older people.

The experiment indicates that an error in interpreting graphical information can have serious impacts. In the experiment, participants were warned that deviations in blood glucose levels can have severe health effects. Health is a critical issue that most people take seriously. The effect of within-bar bias here indicates how serious bias in graphical information can be when it arises in other areas, such as business.

The results of this experiment are consistent with prior research demonstrating that most individuals perceive data points covered by the boundary to be part of the distribution, while those left outside are not (Okan et al., 2016). The findings demonstrated that biased individuals exhibited a strong desire to change their blood glucose level even without adequate evidence or a legitimate rationale. This suggests that the bias might predispose decision-makers to erroneous and unreasonable conclusions.

Furthermore, the findings suggest that graph literacy may not determine the magnitude of a person’s within-bar bias, given that participants with a high level of graph literacy were biased to the same extent as participants with a low level of graph literacy. Although this was an unexpected result, it may be explained by the fact that the participants were able to extract information from both the graph and the text with ease. Participants with low graph literacy, on the other hand, would have an easier time extracting important information from the text. Furthermore, since the graphs contained fictitious information and were configured in an unconventional manner, it is possible that the majority of participants’ attention was drawn away from the text. According to the findings of this study, if all participants pay equal attention to the graphs, it is possible to determine the bias accurately. At the descriptive level, the trend shifted in the predicted direction for rising graphs. This discovery, however, does not imply that the inclusion of error bars will definitely reduce within-bar bias.

The purpose of the second experiment was to determine the extent to which bias in graphical analysis affects preference and likelihood ratings when compared with common ecological materials, building on the results of the first experiment.

This experiment revealed that within-bar bias had a statistically significant impact on the perception and interpretation of the graph. Respondents presented with bars perceived the value below the mean as more likely than the value above the mean (M = 5.26, SD = 1.82 versus M = 2.62, SD = 1.62), p < 0.001, paired t = 14.06, d = 1.40 [1.15, 1.62]. The judgments for the other materials were as follows:
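The paired t statistic reported above can be computed from raw paired judgments; this is a generic sketch of the paired-samples t formula applied to toy data, not the study's analysis code.

```python
import math
import statistics

def paired_t(a, b):
    """Paired-samples t: mean of the differences over its standard error."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

below = [5, 6, 9]   # toy judgments for the value below the mean
above = [4, 5, 6]   # toy judgments for the value above the mean
print(paired_t(below, above))  # close to 2.5 for this toy data
```

A large t, as in the reported result, means the below/above difference is consistent across respondents relative to its variability.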

| Quantity                   | Mean | Standard deviation |
|----------------------------|------|--------------------|
| Judgment below (tables)    | 4.25 | 2.05               |
| Judgment above (tables)    | 3.01 | 1.60               |
| Judgment below (dot plot)  | 4.41 | 2.21               |
| Judgment above (dot plot)  | 2.72 | 1.67               |

| Predictor             | β     | t    | p      |
|-----------------------|-------|------|--------|
| Bias in the table     | -0.20 | 4.42 | <0.001 |
| Bias in dot plot      | -0.17 | 4.42 | <0.001 |
| Graph literacy scores | 0.22  | 3.23 | 0.001  |

The model accounted for a variance of 0.05, F(5, 605) = 6.57, p < .001, and the bias in the bars condition was higher than the bias in the tables condition. The use of the dot plot in the experiment resulted in a significant reduction in the amount of bias observed.

The relationship between bias and graph literacy was only noticeable in the bars condition:

Table 6 shows the results of the survey. 

| Condition | r     | p     |
|-----------|-------|-------|
| Bars      | 0.22  | 0.001 |
| Tables    | -0.01 | 0.86  |
| Dot plot  | -0.04 | 0.58  |

This link indicates that persons who are graph literate exhibited a greater amount of within-bar bias. The examination of the materials yielded the following results:

| Materials  | Mean | SD   | t (vs bar graphs) | p   | d              |
|------------|------|------|-------------------|-----|----------------|
| Bar graphs | 4.95 | 1.41 |                   |     |                |
| Tables     | 4.56 | 1.46 | t(407) = 2.74     | .01 | .27 [.08, .47] |
| Dot plots  | 4.58 | 1.43 | t(402) = 2.65     | .01 | .26 [.07, .46] |

According to these results, bar graphs were judged more favourably than tables and dot plots, despite the fact that tables and dot plots showed a lower degree of bias.

The two experiments found a relationship between within-bar bias and judgments of bar graphs that depict means. According to the experiments, when people are provided with graphs that depict mean values, a significant proportion will make biased decisions, showing a high degree of decision vulnerability. The findings indicate that bias in the perception and interpretation of graphical information can have huge impacts on issues such as healthcare and public administration, particularly if decision-makers are biased in their interpretation of graphical information. Perhaps the surprise in the two experiments is that graph-literate people had a higher rate of within-bar bias. This finding is surprising because the expectation is that graph literacy should reduce bias and improve the accuracy of interpretation (Okan et al., 2016). Given the widespread use of bar graphs in visualization, bias in the interpretation of graphical data could have far bigger implications than projected by this study. Based on these findings, one solution to this data visualization challenge is to replace bar graphs with other chart types.

The bias experienced when interpreting graphical data was explained by Newman and Scholl (2012), who associated it with the physical attributes of the bar graph. A bar graph displays an enclosed region, with each bar starting from a particular axis. Because of its boundaries and shading that differs from other regions of the graph, most people presume that this enclosed region houses all the required data points within a given distribution. This presumption is biased because people tend to underestimate how far individual values are from the mean.

When people are presented with graphs with rising bars, the bias is demonstrated by systematic underestimation of the mean values regardless of how tall the bar is (Godau et al., 2016). This research suspects that the bias in perception is caused by basic principles of object perception, especially when the object is attached to an axis such as the x-axis. If this holds, factors such as how the graph is formatted can influence how such graphs are perceived. Ali and Peebles (2013) note that although bar graphs may be systematically misinterpreted, they can also help decision-makers make discrete comparisons between different data points.

Some researchers have tried to explain the bias phenomenon through cognitive research, verbal protocol analysis, and eye-tracking. The results of such research could prove useful in explaining how the use of a dot plot reduced the level of bias. Godau et al. (2016) proposed that eye-tracking can help determine whether people’s attention was on the dots or the spaces in between. It can also be used to determine how differences in attention to the graph affect how the graph is interpreted. Process-tracing research can also provide useful insights into how people understand and perceive errors. Future researchers could investigate how bias in the perception and interpretation of graphical data influences real-world decisions for decision-makers in business and public administration.

This research suggests that graphical bias in interpretation and perception may exhibit a curvilinear property: people with the highest level of graph literacy are expected to have the lowest level of bias. The reduced bias among such experts is expected because of their strong skills in interpreting error bars.

Thirty-eight participants completed an online questionnaire developed for this research project, which took less than 10 minutes, and the results were collected. This poll was conducted to determine how accurately various graphs are interpreted by different people. We received 30 affirmative replies; the remaining responses were neither clearly affirmative nor negative. All of the questions were basic and straightforward, involving the interpretation of various graphs, the level of one’s knowledge of graphs, and some background questions about the participants. In the comments section, a few people expressed dissatisfaction with the design of the questions, difficulties in accessing the form, and the clarity of the images. Overall, the outcome of the study was favorable.

Based on the study results, we can infer that most participants who use graphs daily are quite competent at reading different charts, with bar graphs having the highest success rate. Even those who do not frequently use diagrams in their everyday lives gave good responses. Furthermore, as demonstrated in both trials, the time needed to read bar graphs is significantly shorter than the time required to interpret other charts, such as those with error bars. The graphs displayed in the survey could have been improved; because of the programs used, they lacked clarity and displayed poorly.

Finally, according to the comments received, participants enjoyed taking part in the survey, which took roughly 8 minutes to complete on a computer or mobile device. Because there was minimal difficulty comprehending the questions and images, most participants were quick, and only a few took longer than usual.

Conclusion

This study has demonstrated systematic bias in graphical bars used to represent distribution means, and this bias is discussed further in the paper. The bias arises from cognitive generalizations of the fundamental principles of object perception. According to the findings of this article, such biases increase the likelihood of misunderstanding and mistakes in judgment. The mistakes that result from this bias could have catastrophic consequences for policy decision-making, particularly in the healthcare sector. This study also found some indication that having moderate graph literacy may increase the likelihood of experiencing the bias: those with expert abilities are less affected than people with merely adequate levels of expertise.

The bias in understanding graphs is so robust that it can affect even graphs with an excellent design; bias in the interpretation of typical bar graphs will continue to exist regardless of how well the graph is constructed. The purpose of this study was to determine the extent to which the use of shapes other than rectangles causes misperception of data and/or increases the time required to interpret data in commonly occurring graph types (Ancker et al., 2021). According to the findings of experiments 1 and 2, rectangular bar graphs, regardless of how well they are drawn, produce a within-bar bias when used to represent the mean of distributions.

Despite the efforts of graph and visualization designers to produce better graphs, it has not been possible to achieve significant reductions in bias in people’s perception when interpreting graphs. In this study, the researchers discovered that using graphs constructed in alternative formats has potential for lowering this bias, which in turn increases the efficiency of decision-making. People’s judgments concerning visualized information presented in graphical form have been the subject of much research, and this study contributes to that body of knowledge. It also provides evidence that certain judgments made on the basis of graphical materials may be in error, implying that decision-makers should be tested to determine whether they are biased in their interpretation of graphical information.

There are a few routes for furthering this field of study in order to obtain a better grasp of the topic. Because of the data-driven nature of the selection of visualizations, various kinds of visualizations and relationships were excluded from the experiment: for example, visualizations of correlations such as scatter plots (Few, 2012), and charts of relationships such as network diagrams, hierarchical charts, and flow charts (Berinato, 2016). Similarly, a number of concepts were absent from the visualizations employed because they were deemed inapplicable. Studies employing appropriate data to construct these missing visualizations could help examine their usage and performance in real-world situations. Second, it may be possible to combine the findings of this and other research with statistical and data-mining technologies to develop model-based applications. In the future, such modules might instruct visualization designers and analysts on the types of visualizations to employ in different situations, depending on the features of the data, the users, and the target audiences.

References 

Adams, J., Khan, H. T. & Raeside, R., 2014. Research Methods for Business and Social Science Students. 2nd Ed. ed. London: Sage Publications Ltd. 

Andrews, E., 2013. Who Invented the Internet? [Online] Available at: https://www.history.com/news/who-invented-the-internet. 

Berinato, Scott (June 2016). “Visualizations That Really Work”. Harvard Business Review: 92–100. 

Boy, J., Rensink, R. A., Bertini, E. & Fekete, J.-D., 2014. A Principled Way of Assessing Visualization Literacy. IEEE Transactions on Visualization and Computer Graphics, 20(12), pp. 1963 – 1972. 

Borkin, M. A. et al., 2013. What Makes a Visualization Memorable? IEEE Transactions on Visualization and Computer Graphics, 19(12), pp. 2306 – 2315. 

Bishop, I. D., Pettit, C. J., Sheth, F. & Sharma, S., 2013. Evaluation of data visualisation options for land-use policy and decision making in response to climate change. Environment and Planning B: Planning and Design, 40(2), p. 213–233. 

Callegaro, M., Manfreda, K. L. & Vehovar, V., 2015. Web Survey Methodology. London: Sage Publications Ltd. 

Cleveland, W. S. & McGill, R., 1984. Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. Journal of the American Statistical Association, 79(387), pp. 531-554. 

Carpendale, S., 2008. Evaluating Information Visualizations. A. Kerren et al. (Eds.): Information Visualization, Volume LNCS 4950, pp. 19-45. 

De Bruijne, M. & Wijnant, A., 2013. Comparing survey results obtained via mobile devices and computers: an experiment with a mobile web survey on a heterogeneous group of mobile devices versus a computer-assisted web survey.. Social Science Computer Review, 31(4), pp. 482-504. 

Department for Communities and Local Government, 2009. DataViz: improving data visualisation for the public sector. 

Dilla, W., Janvrin, D. J. & Raschke, R., 2010. Interactive Data Visualization: New Directions for Accounting Information Systems Research. JOURNAL OF INFORMATION SYSTEMS, 24(2), pp. 1- 37.

Dawson, C. (2019). Introduction to research methods (TIFFIN MAIN Q180.55.M4 D36 2019; Fifth edition.). Robinson. 

Few, S., 2012. Show Me the Numbers: Designing Tables and Graphs to Enlighten. 2nd Ed. Burlingame, CA: Analytics Press.

Few, S., 2013. Information Dashboard Design. 2nd Ed. ed. Burlingame, CA: Analytics Press. 

Fowler, F. J., 2002. Survey Research Methods. 3rd Ed. London: Sage Publications Ltd.

Few, S., 2016. When Are 100% Stacked Bar Graphs Useful? [Online] Available at: https://www.perceptualedge.com/blog/?p=2239

Few, S., 2019. A Design Problem. [Online] Available at: https://www.perceptualedge.com/example3.php 

IBM, 2019. The IBM PC’s debut. [Online] Available at: https://www.ibm.com/ibm/history/exhibits/pc25/pc25_intro.html.

Jenkins-Smith, H. C., Ripberger, J. T., Copeland, G., Nowlin, M. C., Hughes, T., Fister, A. L., & Wehde, W. (2017). Quantitative research methods for political science, public policy and public administration: 3rd edition with applications in R. 259. 

Knaflic, C. N., 2015. Storytelling with Data. 1st Ed. ed. Hoboken, New Jersey: John Wiley & Sons Inc. 

Knaflic, C. N., 2018. #SWDchallenge: slopegraph. [Online] Available at: https://www.storytellingwithdata.com/blog/2018/6/1/swdchallenge-slopegraph. 

Lam, H. et al., 2012. Empirical Studies in Information Visualization: Seven Scenarios. IEEE Transactions on Visualization and Computer Graphics, Institute of Electrical and Electronics Engineers, 18(9), pp. 1520-1536.  

Labaree, R. V. (n.d.). Research guides: Organizing your social sciences research paper: Quantitative methods [Research Guide]. Retrieved November 10, 2020, from https://libguides.usc.edu/writingguide/quantitative 

Mavletova, A. & Couper, M. P., 2016. Device use in web surveys. The effect of differential incentives. International Journal of Market Research, 25(4), pp. 523-544. 

Morse, E., Lewis, M. & Olsen, K. A., 2000. Evaluating visualizations: using a taxonomic guide. Int. J. Human-Computer Studies, Volume 53, pp. 637-662. 

Newell, R., Dale, A. & Winters, C., 2016. A picture is worth a thousand data points: Exploring visualizations as tools for connecting the public to climate change research. Cogent Social Sciences, 2(1), pp. 1-22. 

Ofcom, 2015. The UK is now a smartphone society. [Online] Available at: https://www.ofcom.org.uk/about-ofcom/latest/media/media-releases/2015/cmr-uk-2015. 

Ofcom, 2018. Pricing trends for communications services in the UK, London: Ofcom. 

Ofcom, 2019. Connected Nations Update Spring 2019, London: Ofcom. 

Opiła, J., 2019. Role of Visualization in a Knowledge Transfer Process. Business Systems Research, 10(1), pp. 164-179. 

Playfair, W., 1786. The Commercial and Political Atlas. 1st ed. London: J. Debrett. 

Prion, S. & Adamson, K. A., 2013. Making sense of methods and measurement: Levels of measurement for quantitative research. Clinical Simulation in Nursing, 9(1), pp. e35-e36. https://doi.org/10.1016/j.ecns.2012.10.001 

Revilla, M. & Ochoa, C., 2017. Ideal and maximum length for a web survey. International Journal of Market Research, 59(5), pp. 557-565. 

Rensink, R. A. & Baldridge, G., 2010. The Perception of Correlation in Scatterplots. Computer Graphics Forum, 29(3), pp. 1203-1210. 

Shewan, D., 2016. Data is Beautiful: 7 Data Visualization Tools for Digital Marketers. [Online] Business2Community.com, 5 October. Archived from the original on 12 November 2016. 

Stowers, G., 2013. The Use of Data Visualization in Government. [Online] Available at: https://www.businessofgovernment.org/sites/default/files/The%20Use%20of%20Visualization%20in%20Government.pdf. 

Shamim, A., Balakrishnan, V. & Tahir, M., 2015. Evaluation of opinion visualization techniques. Information Visualization, 14(4), pp. 339-358. 

Spence, I., 2005. No Humble Pie: The Origins and Usage of a Statistical Chart. Journal of Educational and Behavioral Statistics, 30(4), pp. 353-368.

Singh, K., 2007. Quantitative Social Research Methods. Sage Publications Pvt. Ltd. 

Grant, C. & Osanloo, A., 2014. Understanding, selecting, and integrating a theoretical framework in dissertation research: Creating the blueprint for your “house”. Administrative Issues Journal: Education, Practice and Research, 4(2). https://doi.org/10.5929/2014.4.2.9 

The Art of Consequences, 2018. Designing War-Care: “Diagram of the Causes of Mortality in the Army of the East”. [Online] Available at: https://edspace.american.edu/visualwar/nightingale/#top. 

Tufte, E. R., 1990. Envisioning Information. 10th ed. Cheshire, Connecticut: Graphics Press. 

Tufte, E. R., 2001. The Visual Display of Quantitative Information. 2nd ed. Cheshire, Connecticut: Graphics Press. 

Tufte, E. R., 2006. Beautiful Evidence. 1st ed. Cheshire, Connecticut: Graphics Press. 

Visually, 2019. History of Data Visualization. [Online] Available at: https://visual.ly/m/history-of-data-visualization/ 

Wexler, S., Shaffer, J. & Cotgreave, A., 2017. The Big Book of Dashboards. 1st ed. Hoboken, New Jersey: John Wiley & Sons Inc. 

Wakeling, S., Clough, P., Wyper, J. & Balmain, A., 2015. Graph Literacy and Business Intelligence: Investigating User Understanding of Dashboard Data Visualizations. Business Intelligence Journal, 20(4), pp. 8-19. 

Williams-McBean, C. T., 2019. The Value of a Qualitative Pilot Study in a Multi-Phase Mixed Methods Research. The Qualitative Report, 24(5), pp. 1055-1064. 

Viegas, F. & Wattenberg, M., 2011. How To Make Data Look Sexy. [Online] CNN.com, 19 April.