IAN: from theory to practice

An example of how the IAN model can be used as a tool for quality assessment in young audience music productions
1. Summary

Concerts Norway (Rikskonsertene) has extensive experience in using the IAN model as an evaluative and qualitative assessment tool in all of their young audience music productions. Concerts Norway presents two live professional music concerts per year in schools for all students in Norway, grades 1 to 10. In IAN: from theory to practice, Scott Rogers, Concert Director of Concerts Norway, explains why the IAN model is important for Concerts Norway, their experiences with evaluating school concerts, how the model is used in practice, and the possible strengths and weaknesses of the model. This talk was presented at the first YAMsession, held in Brussels in 2013.

Do you want to know more about the IAN model? Click here

For a PowerPoint presentation on different examples of concerts defined by the IAN model, click here

2. Introduction

I have three simple questions which guide most of my work:

  1. What kinds of concerts should we produce?
  2. Are they important?
  3. How do we know?

Question number three puts the need for systematic evaluation at the very core of our work.

We have a very special situation which arises when we talk about the quality of outreach concerts for young audiences. We work with a unique type of concert where it is possible to have extremely good music, but a bad concert. This is because of the need for the music to connect with its audience - an audience who usually hasn’t requested the concert or purchased a ticket in advance. Added to this is the fact that a good musician is only perceived as “good” within a certain context. It is not difficult for even the finest musicians to wind up in a situation where they have no possibility to be at their best. And then, there is our audience. It is not difficult to make children laugh, shout out or clap their hands (and these responses don’t necessarily need to have anything at all to do with a musical experience!). Children’s music preferences are well researched and documented. It is relatively easy to create a concert that children “like”. But “liking” is only a small part of the extensive emotional spectrum upon which music can be experienced, and it is often based on music with which the children are already familiar. This being the case, how do we know when we have presented a high quality concert which has touched our audiences?

3. Background

I would like to use a little bit of time today to explain why the IAN model is important for Concerts Norway, our experiences with evaluating school concerts, and how we use the model in practice.

I should start by explaining a little about our organization and our need for a systematic evaluation of our creative work.

Concerts Norway runs a school concert organization, funded by the Department of Culture, which delivers two live concerts each year to every school child in Norway, 1st through 10th grade. We do this through a network of partners and producers who create, adapt and distribute the concerts throughout the country.

Even though we have a large network of national and regional producers, we want to achieve an overall artistic focus and a national standard of quality. To do this, we need quality control mechanisms that we can rely on, and that are trusted by our network.

4. Our history

Concerts Norway began their first attempts at systematic evaluation on a national scale during the late 1990s. A rather large Program Council was established with the most experienced producers in the network participating. We quickly found out that the group was too large, and that it was absolutely essential that experienced musicians and teachers were also included in the evaluation process. Our experience with the IAN model has taught us that the size and composition of the group using the model relates directly to the quality and validity of the result. The current Program Council consists of 1 national producer, 2 regional producers, 1 musician, 1 teacher and a moderator who leads the evaluation process.

Each new school concert production is documented on video during its first touring period. This (and other) documentation is then sent in to the national Program Council for evaluation. During the course of monthly meetings, each new program is evaluated and the responsible producer is given the results of the evaluation. Programs fall into one of two categories: recommended for further use in the school concert system, or not recommended, with a need for a second documentation after adjustments to the program. Programs which are not recommended can be evaluated a second time. After two not-recommended evaluations, the program is removed from the school concert system.

From the very start, the results of this systematic evaluation were almost immediately noticeable. Simply the act of raising an organized discussion on the question of quality (What is it? Who defines it? How do we create and measure it?) enriched the entire production system. The schools noticed a higher level of quality and a larger measure of stability and consistency in the concerts they received.

This was, however, not without its problems: the quality of documentation varied widely, discussions were easily dominated by those with the longest experience, the highest level of education or simply the biggest mouth, and the evaluations themselves were inconsistent. At the end of the day, flawed evaluation still seemed better than no evaluation at all.

But we needed to be better. What we needed was a more consistent and methodical way of evaluating: one that leveled the playing field for all participants, and that enabled us to give producers evaluations that positively influenced the way they worked.

5. Using the IAN model

Our answer to these challenges was the IAN model. The model sets a framework for the evaluation dialogue which allows all participants to contribute equally. This happens because all participants in the evaluation begin their discussion by talking about the model’s three vectors: Intent, Ability and Necessity. These values help create a discussion where all viewpoints have validity, not just those of experts. Having the three vectors define a “room for dialogue” creates a controllable dialogue which a moderator can hold on target, so that the group can keep its focus on obtaining a result, and not get bogged down in the process or become distracted.

Over the course of several years, we have refined our methodology and found that the model in its purest form is perhaps the most functional. We have gone from uncontrolled discussions and “check-list” evaluation, to deep discussions on the many factors that influence our experience of quality in a concert program.

6. Defining the vectors

The IAN model is often misunderstood as a mere definition of its three vectors: Intent, Ability and Necessity. While we can learn much from a discussion of the vectors, they by themselves do not constitute an evaluation, nor do they release the full potential of the model.

It is important to discuss the three vectors, and to agree on their relative strengths and weaknesses in order to better define what the concert IS. Most evaluations make the mistake of focusing on what the concert IS NOT, in their attempt to define quality (or the absence of quality). This is not a strong point of departure for making something better, which should be the whole point of our work with standards of quality. Arriving at a common understanding of the three vectors is a necessary precondition to being able to talk about the production on its own terms. A usable evaluation takes its perspective from the strengths of the program, and not primarily from its weaknesses.

We interpret “Intent” as encompassing both artistic intent and the intention (or the will) to communicate to (or “reach”) an audience.

We interpret “Ability” as encompassing both musical/technical ability (in relation to specific musical genre and tradition), and the ability to communicate effectively with an audience.

We interpret “Necessity” as concerning primarily our ability to facilitate the musical experience in a meaningful way for children, and not as an instrumental necessity concerning the application of the concert to the school curriculum. First and foremost, we talk about creating a meeting between music, musicians and children in which the children have a musical experience they might otherwise not have had access to.

7. Discussing the production as a whole – it’s a dialogue, not a debate

It is only after we have defined (and agreed upon) what it is that we are talking about, that we can rationally discuss the program as a whole. However, the way we talk about it influences the way we think about it. For this reason, it is important that we use dialogue in our use of IAN, and not debate.

Dialogue vs. Debate


This aspect creates an essential role for the moderator. It is important that the moderator does not participate in the discussion, but instead guides it with well-chosen questions:

  • What is best about this program? Where does it function well? How? Why?
  • How does the program function artistically and communicatively as a whole?
  • How does the program function over time? Where is the overall form most functional?
  • How is the concert’s dramatic form built up?
  • If the content of the program is familiar to the children: Is the music so accessible that it can exist without extra help?
  • If the content of the program is unfamiliar to the children: What needs to be done in order to “make contact” between the music and the children’s perception of the music?
  • Which devices are used to facilitate the music? Do these devices function effectively? Do they do too little? Do they do too much?
  • Where do these devices strengthen the program by giving the children better access to the musical experience? Where do these devices detract from the musical experience by taking on a disproportional interest or value of their own?

All participants in the evaluation dialogue must themselves be held to a certain level of expectation. Statements cannot simply be taken as fact, but must be defined and defended. Broad generalities should not be allowed, only specific, detailed feedback. Strengths in the program should be explained and exemplified. Specific strategies should be suggested when weaknesses are identified. It is the moderator’s job to challenge all unsubstantiated statements.

P: I thought this was a really good concert.

M: Why?

P: Well, the musicians played very well.

M: What particularly impressed you about their performance, and what context defines “good playing” in this particular musical style?

P: They had a great deal of technique and played well stylistically.

M: How important is technique and style in this particular concert situation?

P: I’m sure the children can hear and appreciate this kind of virtuosity, but I am unsure if they have enough experience with musical style for it to make an impression on them.

M: If we try to look at the concert from the children’s perspective, what would seem to be most accessible or most important to them?

The moderator has a crucial role. He should strive to interrupt the conversation as little as possible, but quickly step in when it loses focus or strays off-topic. The moderator should use questions to clarify basic information or viewpoints, build upon important themes, and refocus the discussion as needed. In this aspect, the methodology for the discussion of the program as a whole uses a type of “question-driven” Socratic Dialogue – a methodology within a methodology.

8. Suggestions for improvement

The primary goal of our concert evaluation is not to judge the programs, but to give the producers and musicians valuable information which they can use to improve and refine their programs. Some of our very best concert programs over the past few years have been programs which were not recommended at their first evaluation, but where the evaluation created a clear and positive direction for the further development of the concert. It is important that the reflection and analysis that goes on during the evaluation dialogue results in comments that the producer can work with. Suggestions for improvement should include three elements:

  1. A statement describing an opportunity to strengthen the program
  2. An explanation of why or how this occurs in the program
  3. A suggestion for strengthening the program: what effect it might have, and by doing what

Example:

  1. The audience behavior suggests that they lose interest two-thirds of the way into the program.
  2. Twenty minutes is a predictable low-point in the concentration curve of elementary school students.
  3. Create a dramatic or musical break in the program after about 20 minutes. This will strengthen the overall concert form by creating variation, “zeroing out” or “rebooting” the program and creating a fresh start for the beginning of the final sequence.

9. Dialogue with the producer and the performers

After the evaluation meeting, we create a summary of each program which includes:

  • The definition of IAN’s vectors
  • The discussion of the program as a whole
  • Suggestions for improvement
  • The final assessment (recommended / not recommended)

(The program’s information and pedagogical materials, since they are not an active part of the performance, are evaluated by our Information Section.)

This summary is then sent out to the members of the Program Council so that they can check that it represents their viewpoints. The summary is then edited after this final check. The finished summary is sent to the producer of the program, who continues the dialogue directly with the musicians. For programs which are not recommended for further use, the written summary is also often followed up by a conversation between the producer and the moderator of the Program Council to elaborate on certain points and discuss possible adjustments. In certain cases, where the video documentation leaves doubt as to the artistic quality of the program or the audience response, the moderator and/or members of the council will visit the program during a live-performance.

10. What doesn't IAN tell us about quality?

IAN is a tool to talk about artistic and performative qualities. However, there are lots of other valid viewpoints and success criteria connected to school concerts outside of the purely performative. Children, teachers, school administrators, musicians, producers, politicians and others, all have their own perspectives, needs, and definitions of what quality they expect from a school concert. IAN gives us just one part of a larger picture, and other tools must be used (questionnaires, focus groups, interviews, etc.) to evaluate these other areas so that we can make our artistic decisions based on a total understanding of the musical experiences we produce.
