Asked and Answered: Check All That Apply Survey Questions

This blog post is part of a series called Asked and Answered, about writing great survey questions and visualizing the results with high impact graphs. Dr. Sheila B. Robinson is authoring the Asked series, on writing great questions. Dr. Stephanie Evergreen is authoring the Answered series, on data visualization. View the Answered counterpart to this post on Dr. Evergreen’s website.


Let’s talk about check all that apply survey questions. I’ll admit up front, they’re not my favorite.

I love ice cream. As a respondent, the survey question below would be really easy for me to answer. I’d simply check them all. (It’s not the same with vegetables. I still can’t stand asparagus.)

[Image: a check-all-that-apply question about ice cream flavors]

Check all that apply (CATA) survey questions are easy to compose. Just ask a question, list a bunch of options, and let people know they can choose as many as they like, right? The thing is, problems can easily hide in these types of survey questions. But first…

Do your homework!

The key to designing quality survey questions (of any type) is like mise en place for a chef – have everything ready and in its place before you begin. For your survey to yield the best data you will need to have these “ingredients” ready:

1.) One or more research or evaluation questions. I call these the “big” questions to distinguish them from the questions we ask on surveys or in interviews.

2.) A clearly articulated purpose for using a survey. You need a clear picture of how a survey will help answer the big questions.

3.) An understanding of what you need to measure to answer your big questions.

4.) Knowledge about your respondents. Have an idea of who they are, what they know, and what kinds of questions they are capable and willing to answer.

Problems with CATA

Let’s start with the example above. If I check all of the response options, you won’t learn very much about me except that I really like ice cream, or that I’m not fussy about flavors, or perhaps both – and you won’t know which it really is. CATA questions don’t always yield actionable insights.

Second, CATA questions can be tricky to analyze. One way to approach analysis is to treat each response option as if it were its own dichotomous – YES or NO – question. How many respondents checked chocolate? How many checked vanilla? And so on…
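To make that “each option as its own YES/NO variable” idea concrete, here’s a minimal analysis sketch in Python with pandas. The file name, flavor columns, and 0/1 coding are hypothetical placeholders for illustration, not from any real survey.

```python
# Minimal sketch: summarize a check-all-that-apply question by treating each
# option as its own dichotomous (checked / not checked) variable.
# The file name and column names below are hypothetical.
import pandas as pd

responses = pd.read_csv("ice_cream_survey.csv")           # one row per respondent
flavors = ["chocolate", "vanilla", "strawberry", "mint"]  # one 0/1 column per option

summary = pd.DataFrame({
    "checked": responses[flavors].sum(),                    # respondents who checked it
    "percent": (responses[flavors].mean() * 100).round(1),  # share of all respondents
})
print(summary.sort_values("checked", ascending=False))
```

Each row of the output answers one of those dichotomous questions (how many respondents checked that flavor), which is exactly the per-option framing described above.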

Also problematic is that research* has confirmed (and we’ve all probably experienced!) that survey respondents don’t always read all the choices carefully, especially if the list is long. This means that if a respondent leaves a box unchecked, you won’t know if the person:

  1. simply skimmed past and really didn’t consider that option,
  2. was unsure whether the option applied to them, or
  3. truly found that the option didn’t apply.

Alternatives to CATA

While there’s no way to ensure a respondent reads and considers each option on the list, there are alternatives to check all that apply survey questions. One is to take each response option and make it a dichotomous question – that is, a question with two response options, usually YES or NO. This essentially asks respondents to read each choice and mark a response one way or the other.

[Image: “Do you enjoy these ice cream flavors?” presented as a series of YES or NO dichotomous questions]

Dichotomous isn’t the only alternative, however. Returning to your research/evaluation question, your purpose for the survey, and what you need to measure, you may consider asking people to what extent they like or dislike each flavor. Then offer a set of response options such as “like very much,” “like a little,” “dislike a little,” and “dislike a lot,” or simply “like it,” “don’t like it,” and “no opinion.” The specific response options you use will be the result of your own deep thinking about what exactly you want to measure and what you hope to learn from the data. It’s important to mention that research* on survey question formats has shown that respondents typically endorse more options in these formats than in a CATA format, but it’s not clear whether these formats also improve accuracy.

Up Next in the Series…

Perhaps you need to know which ice cream flavors people like best and least – something that calls for a ranking question. You’re in luck… look for a blog on ranking questions coming soon!

In general, I encourage avoiding CATA questions unless you are confident they will yield useful data, AND you have a solid plan for analysis. If this is the case, go for it!

Pro Tips:

  • Consider alternative question formats (such as dichotomous or rating scale questions)
  • Make response options mutually exclusive (necessary for ANY multiple choice style survey question!)
  • Limit the number of response options (or people will fatigue and skim)
  • Randomize response option order if possible (this mitigates the primacy effect – people choosing what they see first); a quick randomization sketch follows below
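Many survey platforms offer a built-in setting for randomizing option order. If you’re assembling a survey programmatically, the idea is as simple as this small Python sketch; the flavor list just reuses the ice cream example for illustration.

```python
# Small sketch: give each respondent a freshly shuffled copy of the option list
# so no single option benefits from always appearing first (primacy effect).
import random

OPTIONS = ["Chocolate", "Vanilla", "Strawberry", "Mint chocolate chip"]

def options_for_respondent(options=OPTIONS):
    shuffled = list(options)   # copy so the master list keeps its original order
    random.shuffle(shuffled)   # shuffle the copy in place
    return shuffled

print(options_for_respondent())
```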

* See, for example: When Online Survey Respondents Only Select Some That Apply; Comparing Check-All and Forced-Choice Question Formats in Web Surveys; and Yes–No Answers Versus Check-All in Self-Administered Modes: A Systematic Review and Analyses

In the other posts in our Asked and Answered series, we provide options for Rating Scale questions, Ranking questions, and Demographics.

See you soon.

We go into way more detail on these topics in our books. Dr. Sheila B. Robinson is co-author of Designing Quality Survey Questions. Dr. Stephanie Evergreen wrote Effective Data Visualization.

Asked and Answered: Rating Scale Survey Questions

This blog post is part of a series called Asked and Answered, about writing great survey questions and visualizing the results with high impact graphs. Dr. Sheila B. Robinson is authoring the Asked series, on writing great questions. Dr. Stephanie Evergreen is authoring the Answered series, on data visualization. View the Answered counterpart to this post on Dr. Evergreen’s website.


Rating scale questions are ubiquitous. I can hardly imagine a survey without at least one. They are typically posed as multiple choice (one answer only) questions composed of a question stem and a set of response options. Like this:

[Image: a question stem and its response options]

Rating scale questions are generally easy to compose but tricky to get just right. A question like the example above may yield some insight, but not actionable insight. We might know the proportion of customers who are satisfied or dissatisfied, but we wouldn’t know why or what to do about it. To get actionable insights, we need to ask a different question, or at least additional questions. But, before even considering what insights you need…

Do your homework!

The key to designing quality survey questions (of any type) is like mise en place for a chef – have everything ready and in its place before you begin. For your survey to yield the best data you will need to have these “ingredients” ready:

1.) One or more research or evaluation questions. I call these the “big” questions to distinguish them from the questions we ask on surveys or in interviews.

2.) A clearly articulated purpose for using a survey. You need a clear picture of how a survey will help answer the big questions.

3.) An understanding of what you need to measure to answer your big questions.

4.) Knowledge about your respondents. Have an idea of who they are, what they know, and what kinds of questions they are capable and willing to answer.

Rating scale questions often appear as Agree-Disagree or AD scales, or variations of AD scales such as the example above – using degrees of “satisfaction” instead of agreement. If we’re not using AD, we’re using IS or “item-specific” wording. Writing item-specific response options can be a bit more challenging and it’s easy to get tripped up by wording, but the upside is that these questions often prove even more informative and useful than AD scale questions.

Response options MUST match question stems!

If the question asks “how useful” something is, the response options (answer choices) should feature degrees of “usefulness.”

[Image: wrong and right ways to match a question stem with its response options]

Here are a few examples of mismatches from real surveys:

[Image: real survey questions with mismatched question stems and response options]

Odds or Evens?

Rating scales typically feature four or five points, the difference being whether you choose to include a “neither agree nor disagree” or similar option as a midpoint. The “odds or evens” midpoint debate has been ongoing since the dawn of surveys and people on both sides of the issue feel very strongly about it.

Here’s the thing:

There is no right or wrong answer because it’s not an “either-or” debate.

The way to determine whether you need a midpoint is to think deeply about what you need to measure, and about your respondents. Given the topic you’re asking about, is it reasonable to think that someone may not have an opinion – that they genuinely feel “neutral” and have no leaning toward positive or negative? If so, choose an odd number of response options (so that there is a midpoint).

Scenario #1: Someone once asked my opinion of my school superintendent. I truly had no feelings about her. She was new. I never had contact with her, didn’t have a sense of what was important to her, and didn’t know much about her past performance. I truly didn’t lean toward a positive or negative opinion.

Scenario #2: If I were to ask a set of respondents how satisfied or dissatisfied they were with the place they currently live, I would expect virtually all of them to have either positive or negative feelings, and so I would choose a 4- or 6-point scale for that question, forcing them to choose a response with a positive or negative leaning.

How many scale points you use depends on your specific context. In general, 4-7 points work well, and up to 11 is fine too. Net promoter survey questions, for example, are wildly popular 11-point scales used by tons of companies and other entities.

N/A is NOT a Midpoint

Keep in mind that response options such as “don’t know” or “not applicable” are not midpoints. If it’s reasonable to think that some respondents may genuinely need these non-substantive options, place them at the far end of the scale and visually separate them from the other options so as not to create a visual midpoint.

Pro Tips:

  • Match question stem to responses
  • Offer 4-7 response options/scale points (but up to 11 is fine too)
  • Least desirable response first (this mitigates the primacy effect – people choosing what they see first)
  • Odd or even number of scale points? (Depends on context!)

In the other posts in our Asked and Answered series, we provide options for Check All That Apply questions, Ranking questions, and Demographics.

See you soon.

We go into way more detail on these topics in our books. Dr. Sheila B. Robinson is co-author of Designing Quality Survey Questions. Dr. Stephanie Evergreen wrote Effective Data Visualization.

Net Promoter Survey Questions: I’m very likely to recommend you read this.

Have you ever encountered a survey question that asks, “How likely are you to recommend {name of business} to a friend?” You probably found an 11-point (0-10) scale accompanying the question, with the endpoints labeled “not at all” and “extremely likely.”

[Image: net promoter question, “How likely are you to recommend…”, with an 11-point scale]

This question is associated with a concept called “net promoter,” and many large corporations are using this as a key metric in lieu of asking a series of more traditional customer satisfaction survey questions. Net promoter (NP) survey questions are designed to measure customer experience and even predict business growth.


Respondents who rate an NP survey question highly — with a 9 or 10 — are called “promoters,” while those who rate from 0-6 are called “detractors.” Those who rate a 7 or 8 are considered “passives.” Some companies will work very hard to try to change detractors to promoters while others focus efforts on moving passives to promoters.


Promoters are loyal customers likely to return and speak highly of the product, service, or business. Passives aren’t as enthusiastic and may easily be lured away by competitors. Detractors (as I’m sure you can guess) can potentially damage business by sharing their negative experiences and perceptions of the product or service.
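For readers who end up crunching these numbers themselves, here’s a minimal sketch of that grouping in Python, along with the conventional Net Promoter Score calculation (the percentage of promoters minus the percentage of detractors). The scoring formula isn’t covered in this post, and the ratings below are invented purely for illustration.

```python
# Minimal sketch: group 0-10 "likely to recommend" ratings into promoters,
# passives, and detractors, then compute the conventional Net Promoter Score.
# The ratings list is made up for illustration.
ratings = [10, 9, 8, 6, 10, 7, 3, 9, 10, 5]

promoters  = sum(1 for r in ratings if r >= 9)       # 9 or 10
passives   = sum(1 for r in ratings if 7 <= r <= 8)  # 7 or 8
detractors = sum(1 for r in ratings if r <= 6)       # 0 through 6

nps = 100 * (promoters - detractors) / len(ratings)
print(f"Promoters: {promoters}, Passives: {passives}, "
      f"Detractors: {detractors}, NPS: {nps:.0f}")
```

With these made-up numbers, 5 promoters and 3 detractors out of 10 respondents yield a score of 20.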

[Image: net promoter example from a United Airlines survey]

In what seems to be the zenith of social media, the idea of identifying those who might promote your brand through social sharing channels, and those who might damage it with similar strategies, makes a ton of sense. There’s no shortage of reasons offered in support of employing this “likely to recommend” question in surveys — after all, a large number of Fortune 1000 companies use it — one of which is that it identifies customer loyalty with a very simple question. The originators of the concept found it worked especially well for mature, competitive industries. But there are also good reasons to consider whether this is the best option for your survey, or whether differently worded questions might yield better data, especially with certain respondent populations.


My book, Designing Quality Survey Questions, features a number of Real World Questions culled from actual surveys my co-author and I have encountered over the years, and Stories from the Field collected from other researchers and evaluators who shared lessons learned from their survey design experiences. Along with continuing to explore current research on survey design, I keep my eyes and ears open for survey experiences people share with me because these too shape my thinking about quality survey design.

[Image: net promoter survey question from HP Connected]

Here are a couple of recent examples — real stories, from real people — on this idea of net promoter survey questions:


Story #1: Using NP with children

Prior to our podcast interview, I was chatting with Rebecca from Glass Frog who asked my opinion on net promoter questions and told me a story of her nine-year-old daughter encountering this “likely to recommend” question on a survey about her Girl Scout troop. Rebecca’s daughter provided very favorable responses about her experience with the Girl Scouts. However, when asked whether she would recommend this troop to others, she responded with a 0 on the net promoter question and told her mom, “No. I really like our troop the way it is. I wouldn’t recommend it to anyone else, because I don’t want anyone else to join.” The young girl took the question quite literally and answered honestly. But hey, kids do that, and we know that surveying children has its unique challenges, right?

Story #2: Using NP with seniors

Given my interest in survey design and knowing I was working on the book, my dad, an octogenarian, would occasionally talk with me about survey questions he came across. One day he mentioned the “would you recommend” question he saw on a survey from his insurance company. He said he was troubled by the question, not because he was dissatisfied with the company, but rather because he didn’t see himself as ever having a conversation with anyone about insurance companies. To him, insurance is a private matter, not to be discussed with others, and thus he felt he simply wouldn’t be in a situation to recommend or not recommend the company. Taking the question quite literally, he didn’t quite know how to answer it. He understood the intent, but was troubled by the particular wording, “would you recommend.” He’s quite a literal guy, and wanted to answer honestly. He asked me, “Why don’t they just ask me if I’m satisfied with their service? That one I can answer easily.”

[Image: net promoter survey question from a local spa]

Lesson Learned:

So here we have it: from eight to eighty, this question is clearly not for everyone! Now, I’m not saying it should never be used. After all, there’s plenty of evidence for the value of the net promoter question to many, many successful companies. My advice is this:

  • Carefully consider who your respondents are and how they’re likely to interpret and answer this question;
  • Revisit the purpose of your survey (you did articulate a clear purpose in your research or evaluation plan before designing your survey, right?);
  • Have a crystal clear understanding of what it is you want to measure — is it customer satisfaction? Is it brand loyalty? Something else?;
  • Based on the above, determine what question or set of questions is likely to yield rich, actionable data that will answer your specific research or evaluation question(s); and finally
  • Pretest the questions! Use a small-scale pilot test, cognitive interviewing strategies, or other pretesting techniques (all conveniently explained in Ch. 7 of DQSQ!).
  • Carefully consider the wording of the main NP question with your respondents in mind:
    • People will interpret these questions in different ways:

How likely are you to recommend [organization name] to a friend, family member, or colleague?

How willing would you be to recommend [organization name] to a friend, family member, or colleague?

  • You can also vary the end of the question – “friend, family member or colleague” could simply become “others.” More concise question wording means less cognitive load for respondents and reduces the likelihood of survey fatigue.
  • If you use an NP question, follow it up with an open-ended “Why?” question. There’s always a risk that respondents will skip the question or provide non-substantive responses (e.g., “because that’s the way I feel”) but in sifting through these in analysis, you may come across some true gems in the form of highly insightful answers.


NP in the NP world: No easy answers

Non-profits have been experimenting with net promoter survey questions and learning how they can best use them to inform their work. Feedback Labs offers good advice for those wanting to experiment with NP questions including how often to ask, and how to use the data in different ways than corporations do. Others are encouraged by early success with this approach in the social sector, but that said, not all non-profits are in favor of the NP question approach, and this perspective should be taken into account as well.


Tinker, tinker, tinker…

If you’ve read my book or previous articles on survey design you know I’m a huge fan of experimenting with question wording, and a staunch believer that words matter, word choice matters, word order matters, and what you do and don’t ask matters. This is true whether we’re talking about surveys or conversations. After all, how many times have you responded to someone who was angry or hurt with “but I didn’t mean it like that!”? If you’re going to try an NP approach, go for it! If you have the opportunity to pre-test with a sample of potential respondents, try some A/B testing, using different versions of the question with random samples of your pre-test respondents. And let me know what you learn!
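If it helps to picture the mechanics, here’s a small sketch of random assignment for that kind of A/B pre-test. The two wordings echo the variants discussed earlier; the respondent IDs are hypothetical.

```python
# Small sketch: randomly assign pre-test respondents to one of two question
# wordings (A/B test). Wordings echo the variants above; IDs are made up.
import random

VERSIONS = {
    "A": "How likely are you to recommend [organization name] to a friend, "
         "family member, or colleague?",
    "B": "How willing would you be to recommend [organization name] to others?",
}

respondents = ["r001", "r002", "r003", "r004", "r005", "r006"]
random.shuffle(respondents)  # randomize order before splitting into groups

# Alternating after the shuffle yields two roughly equal random groups.
assignments = {rid: ("A" if i % 2 == 0 else "B") for i, rid in enumerate(respondents)}

for rid, version in assignments.items():
    print(rid, "->", version, "|", VERSIONS[version])
```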


Many thanks to Chelsea BaileyShea of Compass Evaluation + Consulting, LLC for her generous and insightful feedback on an early draft of this article, and to Rebecca Casciano of Glass Frog for contributing her story.

New Year, New Newsletter: What is Professional Learning?

Happy New Year! Each year, just like many of you, I make… and usually break… the same resolutions, with the exception of one: I learn.

In 2018, I learned how to create and launch my new website. That year, I also learned more about educational equity and culturally responsive education, communication, and leadership. In 2019, I studied negotiation skills, learned more about the science of learning, and added to my Excel, PowerPoint, and data visualization skills. All of this “professional learning” informs my work on various projects and helps improve my professional practice. 

To learn all of this, here’s what I did (along with a few example favorites):

But that’s not all. I also went hiking, rode my bike, ran road races, attended yoga classes, and cooked meals for myself and my family. Wait, what? Was there professional learning to be had from these activities? Let’s return to that notion in a bit…

Why is professional learning important?

Thinking of professional practice as professional learning positions us to think of everything we do as contributing to making us better at what we do. It’s mindset work. What do I mean by that? Mindset work is about attitudes and dispositions and understanding how principles guide our actions. It’s about how and what we learn from successes and failures, and about focusing efforts on incorporating what we learn into how we practice our craft.

What is professional learning?

As a young public school teacher, my professional learning (in those days we called it “in-service” or “staff development”) meant attending workshops on various topics, some directly related to what and who I was teaching, and others seemingly less so. Thankfully, my earliest experiences were positive and influential thanks to skilled presenters and compelling presentations. What I learned from them struck me as reasonable, relevant, and doable. In fact, some* resulted in career-long changes in my teaching practice and approach to students.  

Thus began a career-long fascination with professional learning. 

I once surveyed colleagues for a grad school project asking them to list any activities (including hobbies, sports, volunteer work, etc.) they felt impacted or informed their teaching practice. It was surprising when many of them identified activities not usually associated with professional learning – watching movies, scrapbooking, teaching swim lessons, cooking, and playing sports. They were making connections I wasn’t. They had figured out that the things they did for themselves and for others could also inform their work. 

You’re reading this because we share an interest in some of the same professional topics: learning and teaching, communication and presentations, evaluation, data visualization, survey research, and others. I’ve grappled with finding a thread that ties these seemingly disparate topics together. What I’ve landed on thus far is professional learning.

We read, we listen, and we learn to enhance, refine, or otherwise improve our professional practice, as is done in any field. We’re here because we are dedicated to improving our professional practice. But what if we also considered professional practice itself as a powerful form of professional learning? Let me show you what I mean and share why this is so important.

Evaluation as professional learning

Are you an evaluator? You’re engaging in professional learning all the time. After all, evaluation is conducted for the purpose of learning about programs or policies. As we collect data—from surveys, interviews, focus groups, site visits, observations, record reviews, etc.—we are in a constant state of learning that we then translate (through data analysis, of course) into findings, conclusions, and recommendations. Need info on evaluation? Check out my collection of resources.

Education as professional learning

Are you an educator? As teachers, we’re in a constant state of professional learning not only to keep up with educational innovations or research, but also as we learn each day from our students. Whether we teach kindergarten or college, we learn what our students are capable of, where they struggle to grasp concepts, where they can and can’t apply their understanding, and most importantly, we learn about their interests and special gifts—who they are as people. Effective educators analyze, synthesize, and use all of this learning in practice.

And what about lesson planning? Here’s what I know from my ongoing work in classrooms supporting teachers, teaching graduate courses, and giving workshops: Whether I’m helping a science teacher teach combustion, a math teacher teach circumference and perimeter, or I’m getting ready for one of my survey design or audience engagement strategies workshops, I’m cracking open books, journals, or websites to relearn, refresh, or catch up on the latest research to ensure my teaching is thorough and up-to-date. That’s professional learning. In fact, check out the quote on my home page about the intersection of teaching and learning.

Presentations as professional learning

Have you ever given a presentation? Presentations have many purposes—to sell, to persuade, to inform, to educate, etc.—but what they all have in common is learning. As presenters, we work in service to the audience—our learners. Our goal is for them to walk away with new learning about the topic. Every presentation is a lesson plan. Whether I’m giving a report to the Board of Education, sharing data with stakeholders, keynoting at a conference, or facilitating a workshop, I approach it the same way as I do a classroom session.

Survey research as professional learning

Have you ever used a survey for research or to understand something about your colleagues or customers? That’s professional learning, too! From survey questions we learn about our respondents. We learn about their behaviors and attitudes. We learn how programs and policies are operating, how goods and services are being purchased and used, and how people feel about all of these. We use all of this learning for continuous improvement in our organizations, often communicating it to others (through presentations and education) so that they can improve programs, policies, and practices.

Everything is learning, and we are all learners. 

We pursue learning to enhance our professional practice doing the expected, the usual – reading books, blogs, and journal articles, engaging in listserv discussions, or attending conferences. We learn from both mistakes and successes. To form a deeper understanding of what facilitates success and failure, think of professional practice as learning – the acquisition of experiential knowledge arising from the daily scenarios, vignettes, and case studies that comprise our work.

*Discipline with Dignity, for example, taught me to stay calm in the face of challenging behaviors, not to vilify students when they acted out, and to work collaboratively and privately with those who struggled in my classroom.

Many thanks to my friend Chelsea BaileyShea, of Compass Evaluation + Consulting, LLC, for her thoughtful and valuable feedback on an early draft of this article.

New Newsletter!

Sure, you can read this blog here and check back for updates every now and then, but why not just subscribe to my newsletter The Learning Curve? You’ll get a link to any new blogs right in your inbox, along with a bunch of other cool content on a variety of topics! Easy peasy. Click here. 

[Image: The Learning Curve logo]

Hindsight is 20/20, even with surveys (Cross post with actionable data blog)

Yep, it’s another great co-post with the splendid Kim Firth Leonard, of the actionable data blog.

Almost everyone (probably everyone, actually) who has written a survey has discovered something they wish they had done differently after the survey had already launched, or closed, with data already in hand. This is one of the many ways in which surveys are just like any written work: the moment you’ve submitted it, you inevitably spot a typo, a missing word, or some other mistake, no matter how many editing rounds you undertook. Often it’s a small but important error: forgetting a bit of the instructions or an important but not obvious answer option. Sometimes it’s something you know you should have anticipated (e.g. jargon you could have easily avoided using), and sometimes it’s not (e.g. an interpretation issue that wasn’t caught in piloting – you DID pilot the survey, didn’t you?).

The CASM that Bridged a Chasm: When Cognitive Science Met Survey Methodology and Fell in Love! (cross post with actionable data blog)

When Kim Firth Leonard of the actionable data blog and I write together, we usually refer to each other with a superlative – fabulous, magnificent, wonderful, etc. (all totally accurate, of course) – but now, I’m even prouder to call her co-author! Yes, we are in the throes of writing a book on survey design!*

After a very successful presentation to a packed room at Evaluation 2014 in Denver, CO (if you were there, thanks!) we met with an editor at Sage Publications to pitch our idea and now we’re busy fleshing out chapters and excited to share bits with our readers along the way.

The foundation of our collaborative work lies here: “how evaluators ask a question can dramatically influence the answers they receive” (Schwarz & Oyserman, 2001, p. 128).

Designing Effective Surveys Begins with the Questions BEFORE the Questions! (cross post with actionable data blog)

The art and science of asking questions is the source of all knowledge.      – Thomas Berger

Hey readers, Sheila here, writing once again with the marvelous Kim Firth Leonard, of the actionable data blog.

It’s survey design season, so get ready to flex those question design muscles! Well, to be truthful, it’s always survey design season in our data-saturated, evidence-hungry society. As surveys have become ubiquitous, it is incumbent upon survey researchers to cut through all the noise by developing the most effective instruments we can. And what’s the best way to get ready for any endeavor that requires flexing? A warm-up! Just as failure to warm up for physical activity can invite injury, diving into survey question design without a preparation process can introduce the possibility of gathering bad data. Remember the principle of GIGO: garbage in, garbage out.

It’s All in How You Ask: The Nuances of Survey Question Design (cross post with actionable data blog)

If you do not ask the right questions, you do not get the right answers.

– Edward Hodnett, 20th century poet and writer

At Evaluation 2014, the American Evaluation Association’s annual conference, the incredible Kim Firth Leonard (find her over at actionable data blog) and I facilitated a 90-minute skill building workshop on survey question design. Kim and I have co-authored several posts on our shared passion for survey design. You can find these posts here, here, and here. We were thrilled to geek out with such a great group ready to humor us by taking our pop quiz, listening intently as we shared the science behind answering questions about behavior, nodding as we reviewed fundamental principles of survey design, and genuinely engaging with us in exploring better ways to approach surveys.

When a Direct Question is NOT the Right Question

Who hasn’t answered the question, “What did you learn?” after attending a professional development session? As a PD facilitator and evaluator, I’ve certainly used feedback forms with this very question. After all, measuring participant learning is fundamental to PD evaluation.

In this post, I’ll share examples of actual data from PD evaluation in which we asked the direct question, “What did you learn?” I’ll then explain why this is a difficult question for PD participants to answer, resulting in unhelpful data. Next, I’ll offer a potential solution in the form of a different set of questions for PD evaluators to use in exploring the construct of participant learning. Finally, I’ll show where participant learning fits into the bigger picture of PD evaluation.

Where have all the (qualitative) data gone?

As evidenced in recent posts co-authored with fellow blogger Kim Firth Leonard of actionable data (read them here and here), I’m fascinated with surveys and survey research. Just last week another fellow blogger, Brian Hoessler of Strong Roots Consulting, offered this post on open-ended questions.

I shared with Brian that I recently saw a needs assessment instrument composed of all open-ended questions – maybe a dozen or so questions in all. I always wonder when I encounter surveys with open-ended questions whether the qualitative data collected are indeed systematically analyzed and not just scanned or read through, especially in the case of very brief responses to open-ended questions. If data are analyzed, I wonder what kinds of coding strategies evaluators use – inductive or deductive? Constant comparison? Grounded theory?

A Roundup of Survey Design Resources (cross-post with actionable data)

Sheila here, writing with the magnificent Kim Firth Leonard of the actionable data blog.

Since agreeing that we would co-author a series of blog posts on surveys with a focus on composing good questions, we have discovered countless other blog posts, websites, journal articles, and books on survey research from a variety of fields and perspectives, many of which feature discussions of and advice on question construction. Of course, we have a few personal favorites and well dog-eared texts: