10 Suggestions for Making an Effective Poster


Written papers are the traditional way to share research results at professional meetings, but poster sessions have been gaining popularity in many fields. Posters are particularly effective for sharing quantitative data, as they provide a good format for presenting data visualizations and allow readers to peruse the information at leisure.  For students they are a great teaching tool, as preparing a good poster also requires clear and concise writing.

Making a poster is easy, but making a really good poster is hard.  I have found the guidelines below helpful to students.  The most important piece of advice, however, is the one true for all writing—write, read and revise; write, read and revise; write, read and revise!

  1. Make your poster using PowerPoint. This will allow you to insert text via text boxes and to paste in charts, graphs, tables, maps, and pictures. It is easy! To get your pictures and text boxes to line up consistently, use snap to grid: in the Format tab choose Arrange>>Align and then Grid Settings. Select to view the grid and to snap to the grid. You can set the grid size here as well.
  2. Use a single slide. In the Design tab pick Page Setup, select Custom, and then set the width and height to maximize your slide, given the locally available paper size. At Grinnell the paper width available is 36”, so we set the width to 45” and the height to 36”. Use “landscape” for your orientation.
  3. As in a written paper, have a descriptive title. Put the title (in 68 point type or larger) at the top of the poster. Place your name and college affiliation in slightly smaller type immediately below it.
  4. The exact sections of the poster will vary somewhat depending on the project, but include an abstract placed either under the title or at the top of the left column.
  5. As in a written paper, be sure you have a good thesis: present it early in the poster, support it with evidence, then remind your audience of it as you conclude. Finish with brief citations and acknowledgements in the lower right-hand corner.
  6. Posters should read sequentially from the upper left, down the left column, then down the central column (if you have one), and finally down the right column. Alternative layouts are possible, but the order in which the poster is read must be obvious.
  7. Use a large font: 28 point at minimum.
  8. Limit the number of words. Be concise, and think of much of your text as captions for illustrations.
  9. Use plenty of charts, graphs, maps, and other pictures. Be sure to label your figures and refer to them in the text.
  10. Make your poster attractive. Use color. Pay attention to layout. Do not leave large empty areas.


Throwback Thursday: Big Data in the Early 20th Century

Last week, we talked about the 1888 invention of one of the first tools that could be used to process “big data,” the Hollerith Machine. A fascinating book published in 1935, Practical Applications of the Punched Card Method in Colleges and Universities, records some of the “big data” research that academics undertook using this new technology, including an effort by an anthropology professor at Harvard University to determine precise anatomical profiles for various classes of criminals. He and his research team recorded information about 125 biometric variables for 17,000 criminals, then used Hollerith machines to look for correlations in the data. I’ll let Prof. E. A. Hooton tell you more about this in his own words:

In the course of elaborating our criminal data, one process was performed by the Hollerith sorter which in its complexity is probably unique in anthropometric research. In our series of native white criminals of native parentage there is included a group of 414 robbers. These robbers display as a group a number of statistically significant excesses and deficiencies of certain categories of morphological features…. It was desired to ascertain how many individual robbers manifested each one of every mathematically possible combination of these nine morphological peculiarities. Since there are 512 possible combinations of the presence and absence of these characters, the sorting task involved was stupendous and consumed several weeks of the entire working time of the sorter…. The outcome of the research was a conclusive demonstration that, by taking a sufficient number of peculiarities of the robber group in combination and selecting all of the individuals who possessed that combination, it was possible to pick out a type which was 100 per cent robber. At the same time it was demonstrated that only one robber out of 414 showed this complete and exclusive type combination. It was therefore apparent that morphological type combinations were of no practical use in determining the offenses of criminals, so far as our particular data were concerned.(1)
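Hooton's "stupendous" sorting task is easy to check arithmetically: nine morphological traits, each either present or absent, give 2⁹ = 512 possible combinations. The sketch below illustrates the idea in Python; the randomly generated trait profiles are hypothetical stand-ins, not Hooton's data.

```python
# Nine binary traits -> 2**9 = 512 presence/absence combinations,
# the number of sorting passes Hooton's team faced.
from itertools import product
from collections import Counter
import random

TRAITS = 9
combinations = list(product((0, 1), repeat=TRAITS))
print(len(combinations))  # 512

random.seed(1)
# Hypothetical profiles for the 414 "robbers": each a tuple of 9 trait flags.
robbers = [tuple(random.randint(0, 1) for _ in range(TRAITS)) for _ in range(414)]

# Count how many robbers share each exact trait combination --
# seconds today, but weeks of work on a 1930s card sorter.
profile_counts = Counter(robbers)
```

With 414 individuals spread over 512 possible profiles, most combinations are rare or empty, which is essentially the result Hooton reports: the "100 per cent robber" type fit exactly one man.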

While this project is notable as much for how ridiculous its premise sounds to us 80 years later as for the scale of its undertaking, other chapters in the book record efforts that would not be out of place today, such as attempts to code information about large numbers of hospital patients in an effort to learn more about the causes of mortality, or a survey of over 30,000 businesses in three states to gauge the impact of newly imposed sales taxes. I’ll let Edwin H. Spengler, author of the latter chapter, conclude with a statement that, with only minor changes, could easily appear in any modern work on “big data”:

Much as the compilation of certain statistical data may be desired, however, the expense and the time involved in sorting and tabulating the information, have frequently deterred individuals from going ahead with a given project…. To a large extent, the introduction of mechanical methods of counting, sorting and tabulating numerical facts has eliminated these difficulties. Electric machinery, capable of performing routine operations at the rate of several hundred per minute, has increased the speed and lowered the expense of preparing statistical tabulations. This has resulted in a broadening of the field of statistical research and analysis and has stimulated the projection of studies which, without the use of such equipment, would no doubt have been considered impossible or impractical of accomplishment. (2)

What would Spengler and his colleagues have thought about today’s supercomputers, which can perform more than 10¹² operations per second? And what will researchers 80 years from now view as quaint when looking back at our “big data” research?

(1) E. A. Hooton, “Anthropology,” in G. W. Baehne, editor, Practical Applications of the Punched Card Method in Colleges and Universities. New York: Columbia University Press, 1935, p. 387.
(2) Edwin H. Spengler, “Economics,” in G. W. Baehne, editor, Practical Applications of the Punched Card Method in Colleges and Universities. New York: Columbia University Press, 1935, pp. 397-8.


Throwback Thursday: Big Data in the Late 19th Century

“Big data” is one of the buzzwords of 21st century research. In the sciences, it has been the subject of a special issue of Nature; in the social sciences and humanities, the National Endowment for the Humanities has sponsored a “Digging into Data Challenge” to encourage “big data” research in these fields. Reports on the impact of big data on research have been written by everyone from the Council on Library and Information Resources to Microsoft Research. Many of these pieces of writing emphasize the unprecedented ability that ever-more-powerful computers have given us to collect and analyze massive quantities of data.

But tools for working with “big data” long predate the invention of the modern, integrated-circuit-based computer*. The Hollerith Machine, a “computer” that could rapidly tabulate information recorded on punched cards, was invented in 1888 to solve a pressing big data problem of the day: how to tabulate the Decennial Census data gathered from the U.S.’s rapidly growing population. This sort of punched card technology was used to process Census data for fifty years.
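Conceptually, the work the Hollerith Machine mechanized is just counting cards by field value. A minimal sketch of the idea in Python follows; the census fields and values here are invented for illustration, not drawn from actual Census schedules.

```python
# Each punched card encodes one person's responses as a tuple of fields.
from collections import Counter

cards = [
    ("Iowa", "farmer"),
    ("Iowa", "teacher"),
    ("Ohio", "farmer"),
    ("Iowa", "farmer"),
]

# Tabulate: count cards by state, much as the machine's dial counters
# advanced once for each card routed to a given pocket.
by_state = Counter(state for state, occupation in cards)
print(by_state["Iowa"])  # 3
```

The machine's advantage was doing this electromechanically at hundreds of cards per minute, decades before stored-program computers.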

[Image: Hollerith Census Machine pantograph]
You can read more about how the Hollerith Machine worked in the “History” section of the U.S. Census Bureau’s website.

[Image: c. 1900 Hollerith Census Tabulator]

Not long after the Hollerith Machine was invented, academic researchers were considering how to apply this new tool and its successors to their own “big data” research problems. In our next Throwback Thursday post, we’ll look at some research from 1935 that used punched cards to analyze “big data.”

*Invented by Grinnell alumnus Robert Noyce.

Images from flickr users Marcin Wichery and Erik Pitti respectively, with no changes made, under the Creative Commons License 2.0.


Access to Research Includes Access to Data!

In February 2013, the Director of the White House Office of Science and Technology Policy (OSTP) issued a memorandum to all agency and department heads entitled “Increasing Access to the Results of Federally Funded Scientific Research”.

The memo directed federal agencies that award more than $100 million in research grants to develop plans for increasing public access to peer-reviewed scientific publications. It also required researchers to better account for and manage the digital data resulting from their federally funded research. (At the same time, the OSTP directive acknowledges that access to some data needs to be controlled to protect human privacy, the confidentiality of business secrets, intellectual property interests, and other concerns.)

The OSTP recognizes that research data are valuable and need to be preserved. Increased public access to data – along with better access to the published literature – is fundamental to research, and permits

  • more thorough critiques of theories and interpretations, including replication of research results,
  • scholarly innovation that builds on past work, and
  • practical application of scholarly discoveries.

