
Andreas Håkansson

Food and Meal Science, School of Education and Environment, Kristianstad University

Abstract  – The objectives of the study were, first, to assess the progression in quantitative thinking between first-, second-, and third-year students enrolled in the Bachelor in Culinary Arts and Food Science program and, second, to use the insight to suggest necessary changes to the curriculum to ensure better progression.

A standardized quantitative test was used to measure progression in functional knowledge divided in four categories: (i) calculation skills and basic mathematics, (ii) application and interpretation of descriptive statistics, (iii) interpretation of analytical statistics, and (iv) communication through graphs and charts.

The results show significant progression in skills and confidence across study years. Broken down into the four categories, progression is stronger in statistics (categories ii–iii). The most problematic category is basic calculations, where the average score is low and progress across study years is poor.

A suggestion is presented for how to reform the curriculum in order to improve progression in basic calculation by emulating the methods used for teaching statistics, where progression is better.

Keywords: Learning progression; functional knowledge; quantitative testing

1. Introduction

1.1 Quality and progression

Assessing quality in higher education is a field that has attracted much attention in Sweden since the launch of the standardized quality assessment by the Swedish Higher Education Authority in 2011 (Hjort & Sundkvist, 2010). The current system focuses on measuring alignment between acquired knowledge and intended learning outcomes through analyzing student theses (Hjort & Sundkvist, 2010). However, education quality has many other dimensions. Ensuring progress in the skills and knowledge of students throughout the curriculum is one such important dimension. The 2011–2014 Swedish assessment system has been criticized for not putting enough emphasis on this factor (Buhre, 2014).

Different methods for assessing the progression of functional knowledge through a curriculum have been proposed and utilized. Within medical education the use of quantitative progression tests has been widely adopted (Arnold & Willoughby, 1990; Muijtjens et al., 2008; Schuwirth et al., 2010). These tests consist of a large number of standardized problems drawn from a problem bank (often of the “true or false” or “best alternative” type) and are administered to students annually or quarterly (Arnold & Willoughby, 1990). Knowledge progression and retention are measured by comparing test scores over time. More detailed insights into how knowledge and skills develop in different subfields are often acquired by grouping questions into categories; in medical education these are typically subject areas such as anatomy, biochemistry, and surgery (Muijtjens et al., 2008).

Although widely used, this so-called value-added measurement technique has been criticized. Warren (1984), in an influential paper, pointed out two problems with the technique. First, quantitative comparisons of functional knowledge before and after a course can lead to the trivial conclusion that students have a better understanding of the course material after the course. The second problem is linked to the first: The intention of these tests is to measure the efficiency and quality of the teaching on functional learning. In order to achieve this, what should be tested before the course is not how well the student already masters the course material but to what extent the student has mastered the prerequisites. Thus, Warren (1984) argued that these quantitative value-added tests are inefficient at measuring true quality.

Despite their disadvantages, quantitative value-added methods constitute a widely used technique for assessing progress and retention of fundamental knowledge (Muijtjens et al., 2008; Schuwirth et al., 2010) since reliable, fair, and cost-effective methods are scarce. A recent literature study by the OECD (Kim & Lalancette, 2013) concluded that value-added progress testing is a valuable technique that, when handled correctly, gives fair and reliable results.

An alternative case-based method for assessing progression throughout the curriculum has been proposed and applied to engineering students by Wahlgren and Ahlberg (2013). This qualitative method focuses on how groups of students solve work-life-like cases. Wahlgren and Ahlberg’s case-based method gives a different type of insight from the quantitative value-added tests; it focuses more on the work process and less on fundamental knowledge. Qualitative progression tests are therefore to be considered complements to, rather than replacements for, quantitative methods.

1.2 Culinary arts and food sciences

This study addresses the progression of functional knowledge within the Bachelor in Culinary Arts and Food Sciences program at Kristianstad University. A brief background to the program is needed in order to highlight the particular challenges:

The study program was started in 2003 as a multidisciplinary education in food and meal science. During three years of intense courses, the field is studied from both artisan and theoretical perspectives. The theoretical aspects can be further subdivided into three themes: health and nutrition, food science, and meal culture and communication. Students encounter many different scientific traditions, methods, and tools throughout the curriculum; food science and nutrition are traditionally based in the natural sciences and medicine, whereas meal culture and communication have stronger ties to the social sciences or esthetic studies. The multidisciplinary approach has the potential to train highly skilled graduates who are able to combine and move between different fields to find novel solutions. However, since it requires students to become proficient in several different methodologies and ways of thinking, it poses a challenge for continuous progression throughout the curriculum: students risk forgetting much of the material between courses rooted in different traditions, and each methodology may be given too little time for students to identify key concepts and gain familiarity. Systematically testing to what extent students progress in the different fields and to what extent they retain skills and knowledge is thus especially important in a program like this.

1.3 Objective and limitations

This study focuses on assessing and developing the progression plan of functional knowledge in quantitative methodology in the Bachelor in Culinary Arts and Food Sciences program.

Functional knowledge in quantitative methodology is defined as the mathematics and statistics skills necessary for a successful professional career in the field. This practical definition is chosen in order to comply with Kristianstad University’s objective of educating Sweden’s most employable students (HKR, 2009).

Due to the quantitative nature of the knowledge field under investigation, a value-added-like method using a standardized test administered to students across the curriculum was used in this study. Previous studies have also pointed to the importance of using diagnostic tests to understand and design follow-up of mathematical literacy and as an aid to further develop teaching (Engineering Council, 2000; Gibbs, 1999). Thus, the specific aim of this study was twofold: first, to diagnose the current progression in quantitative tools through the program, and second, to use these data to make suggestions for an evolved curriculum to strengthen progression in quantitative thinking.

1.4 Categories of quantitative thinking

Four categories of quantitative thinking were derived from informal feedback from two representatives of prospective employers and from studying the curriculum and intended learning outcomes for courses in the later part of the curriculum. Each category describes a field and a minimal level of skills and functional knowledge required in order for graduates to carry out relevant work tasks after graduation:

  • (i) Basic mathematical ability to perform calculations and solve problems using mathematics
  • (ii) Ability to use descriptive statistics for communication and interpretation
  • (iii) Ability to interpret analytical statistics
  • (iv) Scientific literacy in interpreting and communicating through graphs and tables

2. Materials and methods

2.1 Problems for the diagnostic test

A diagnostic test was designed to test the skills of students in the four categories. The test contained eight problems (two from each category), three questions on confidence in applying mathematics to everyday and professional situations, and demographic variables. Five of the eight problems were drawn from recent PISA databases (The Swedish National Agency for Education, 2010; 2013) and one question from the national secondary school test in Mathematics 2B (EDUSCI, 2012).

An advantage of relying on PISA questions is that these questions have already been tested and validated. An apparent disadvantage is that these questions are designed for significantly younger students (PISA is used for assessing skills of 15-year-old pupils before they enter secondary school). It is important to ensure that the selected questions test the right skill level; however, comparisons of the requirements in the categories to the relevant questions show that the basic minimal skills are to a large extent covered by the PISA questions. Questions drawn from outside the PISA bank were used to assess knowledge in statistics beyond PISA level, namely for categories (ii) and (iii).

Since proficiency in quantitative thinking is defined in relation to professional application, problems relating to food or health applications were selected when available. The topic, application area, and level of the eight problems are summarized in Table 1.

Table 1. Summary of knowledge questions.


1 Where 5 is the most difficult. (See The Swedish National Agency for Education 2010, 2013).
2 Each question in the test gives one or more E-, C-, and/or A-level points. E-level points correspond to what is required for the lowest passing grade.

Students were also asked to rate their confidence in (1) using mathematics to solve a practical problem in calculating the number of tiles needed when building a new kitchen floor, (2) interpreting charts and tables in a scientific text, and (3) performing the necessary calculations to report standard experiments in microbiology for food analysis.

2.2 Test administration

The test was administered to a majority of first-year (n = 24), second-year (n = 22), and third-year (n = 16) students following the program. All tests were proctored and were completed anonymously during normal class times. Respondents were asked to report age, gender, and secondary school basic mathematics grade (“Matematik A” or similar).

2.3 Statistical software

Regression analyses were carried out using MATLAB r2013b (MathWorks, Natick, MA).

3. Results and discussion

3.1 Proficiency as measured by the problems

The total test score increased from first- to second- to third-year students. The average score for the different groups can be seen in Figure 1. The figure also displays scores divided into the four categories of different types of quantitative skills.


Figure 1. Average results for the different categories. Grouping by study year: 1 (white), 2 (gray), and 3 (black). 

The highest average scores are obtained for graphs and tables (iv). This is true for almost all students; in total only three students did not manage to solve either of the two problems in this category, and two of them were first-year students.

Students scored low in both descriptive (ii) and analytical (iii) statistics. For descriptive statistics, this is largely influenced by the low percentage of correct answers on question 3 (see Table 1). Only four students solved this problem; thus, most students have not mastered combining mean values, standard deviations, and distributions to describe percentages in different ranges. Despite the low levels, progress across study years (SY) can be seen in the results. For descriptive statistics, the increase is between SY1 and SY2. This is reasonable because descriptive statistics is introduced near the end of SY1 (after the point at which the test was administered). Analytical statistics is treated in SY3, and a corresponding increase in score can be seen from SY2 to SY3 in the results.
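To illustrate the kind of reasoning question 3 appears to require (the exact wording of the question is not reproduced here), consider a hypothetical example: given a mean, a standard deviation, and an assumed normal distribution, the share of observations falling within a given range can be computed. A minimal MATLAB sketch with made-up numbers (requires the Statistics Toolbox):

```matlab
% Hypothetical example (not a question from the test): share of bread loaves
% weighing 480-520 g, assuming weights are normally distributed with
% mean 500 g and standard deviation 15 g.
mu    = 500;   % assumed mean weight in grams
sigma = 15;    % assumed standard deviation in grams
share = normcdf(520, mu, sigma) - normcdf(480, mu, sigma);
fprintf('Share of loaves within 480-520 g: %.1f %%\n', 100*share)  % approx. 81.8 %
```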

For calculations (i), the results presented in Figure 1 do not point to any significant progress.

3.2 Comparison to PISA results

For the four PISA questions, ample comparison data is available; see Table 2. This makes it possible to investigate to what extent the students have progressed not only between study years but also compared to the average 15-year-old student. By comparing the increase in the number of students solving a problem between the national 15-year-old sample and our first-year students with the increase between first- and third-year students, the results in Figure 1 can be seen in perspective.

Question 8 measures calculation skills (equation solving). The progress from the average 15-year-old to SY1 is 19 percentage points, and from SY1 to SY3 a further 5 percentage points. Together with the low percentage of students able to solve the problem, this confirms the view that calculation skills do not progress sufficiently within the program. This can be compared to question 2 from the category graphs and charts (iv), where students on average increased 18 percentage points between SY1 and SY3 while the increase from the average 15-year-old to SY1 was only 11 percentage points. Thus, graphs and charts (iv) shows a more significant increase within the program than calculations (i).

It should be noted that these results are only indicative of the relative progress and proficiency in each category. The absolute numbers should be interpreted with care. The students that took the test as 15-year-olds are not the same students as in the study program, and the 2010–2013 PISA results may thus not be completely representative of the skill sets of our students when they were that age. Furthermore, we cannot evaluate to what extent our students constitute a representative sample comparable to the PISA sample.

Table 2. Percentage of respondents giving the correct answer.

1 SY: Study year

2 PISA SE: the percentage of Swedish respondents solving the problems

3 PISA OECD: the OECD average

4 Data source for the two rightmost columns: The Swedish National Agency for Education (2010; 2013)

3.3 How do skills increase across study year?

The effects of study year indicated in Figure 1 and Table 2 do not take into account the influence of other variables. A least-squares linear regression analysis was therefore performed with the total result in each category as the response variable and study year, gender, self-reported secondary school mathematics grade, and age as independent variables. Gender was coded using a dummy variable (1 woman, 0 man), and grades were coded as follows: MVG: 20, VG: 15, G: 10 (secondary school grades were divided into three steps, with MVG representing the highest grade and G the lowest passing grade). The resulting coefficients for the different factors, together with significance levels, can be seen in Table 3.
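The analyses were run in MATLAB (Section 2.3); the exact script is not reproduced in the paper. A minimal sketch of an equivalent model specification, using the coding described above but with hypothetical, made-up data rather than the study data, might look as follows:

```matlab
% Illustrative sketch only: total test score regressed on study year,
% gender (dummy-coded: 1 woman, 0 man), secondary school mathematics grade
% (coded MVG = 20, VG = 15, G = 10), and age. The data below are made up.
score     = [4 5 6 7 6 8 7 9 8]';          % total test score (hypothetical)
studyYear = [1 1 1 2 2 2 3 3 3]';          % study year 1-3
gender    = [1 0 1 1 0 1 0 1 1]';          % 1 = woman, 0 = man
grade     = [10 15 20 10 15 20 10 15 20]'; % coded secondary school grade
age       = [20 22 21 23 22 24 24 26 25]'; % age in years

tbl = table(score, studyYear, gender, grade, age);
mdl = fitlm(tbl, 'score ~ studyYear + gender + grade + age');
disp(mdl.Coefficients)   % coefficient estimates and p-values, cf. Tables 3-4
```

The confidence model reported in Table 4 follows the same structure, with the summed confidence score as the response variable.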

Table 3. Linear regression analysis on total test score and result per test category. 
Notes. Significance levels: *p<0.10, **p<0.05, ***p<0.01. Cross terms, when included, were insignificant (p>0.05). NS: effect not significantly different from zero.

There is a significant (p < 0.05) effect of study year on the total test score. Thus, students improve across years even when controlling for gender, grade, and age. Furthermore, secondary school grade continues to have a strong and highly significant effect on total score.

Looking at the different categories, two different groups emerge. Calculations (i) and graphs and charts (iv) are unaffected by SY and are mainly determined by the secondary school mathematics grade, whereas the statistics categories, (ii) and (iii), are mainly determined by SY. For graphs and charts (iv), the lack of significant progress is not as troubling since results in this category are already high; see Figure 1.

Age does not have a significant effect on total or category score.

3.4 How does confidence increase across study year?

Skills are important but need to be combined with confidence to ensure that graduates will actually use their knowledge in the workplace. A least-squares linear regression analysis was performed to investigate the effect of the independent variables on the confidence level of students. Confidence was measured as the sum of the three individual confidence questions. The resulting coefficients can be seen in Table 4. Study year has a significant and positive influence on confidence, as does the self-reported grade.

It is interesting to note that there is a large and significant effect of gender on confidence; female students have lower confidence in applying quantitative thinking. This effect on confidence is much stronger than the effect on skills, indicating that female students underestimate their ability in applying quantitative methodology. This effect is well known in the literature (e.g., Jones & Smart, 1995); however, it is an important effect in light of the large percentage of female students in the program under investigation (71% in the survey).
Table 4. Linear regression analysis on respondent’s confidence in solving problems using quantitative methods. 


Note. Significance levels: *p<0.10, **p<0.05, ***p<0.01.

3.5 Differences in grades across study year

In summary, the regression analyses in Tables 3–4 show a significant effect of study year but an even stronger effect of grade from secondary school. Since grade is an important factor, it deserves careful investigation.

The Bachelor in Culinary Arts and Food Sciences program has seen an increase in the number of applicants and, consequently, in the minimum admission grade point average over the last couple of years (UHR, 2014); see Figure 2. The first-year students in the spring semester of 2014 thus had much higher grade averages than the third-year students at the same time. This was not reflected in the self-reported mathematics grades of the survey, which showed an almost constant level, as seen in Figure 2. This implies that either mathematics grades are not increasing as fast as overall grade averages among our applicants, or the self-reported grade from the fundamental course in mathematics (i.e., “Matematik A” or equivalent) is not representative of overall mathematics grades. If the prior mathematics skills of the students have increased, as suggested by the minimum admission grade point average rather than by the self-reported grade, this would influence the results. If there is a larger difference in grades between SY1 and SY3 than is reflected in the material, the effect of study year is underestimated in Tables 3–4. If SY3 students outperform SY1 students despite a significantly lower admission grade point average, progression within the program is stronger than indicated by the survey.


Figure 2. Self-reported grade in fundamental mathematics from secondary school (●) and minimum admission grade point average for the bachelor program (◊) from UHR (2014). Error bars denote standard error of means.
Table 5. Curriculum for the Bachelor in Culinary Arts and Food Sciences program as of 2014. 

The bracketed course codes (format XXYYYZ) can be used to access the detailed course syllabus online using the format: http://www.hkr.se/en/education/course-page/?cCode=XXYYYZ.
1.5 hp corresponds to one week of full-time studies.

4. Suggestions to improve the curriculum

The current curriculum can be seen in Table 5. Comparisons of progress across study year in different categories, together with previous research, can be used to suggest changes to the curriculum to improve progress. The test results show that proficiency in descriptive statistics (ii) improves between years. This theme is currently integrated in first-year food science courses as applied lectures and laboratory exercises (especially in NL201G and LB201G; see Table 5). In comparison, basic calculation (i) is handled ad hoc in various courses. This could explain why category (i) shows poor progression compared to category (ii). Thus, it is suggested that mathematical thinking and problem solving be explicitly integrated into the first-year courses where, similarly to how descriptive statistics is being taught, it has a clear, practical application, especially in the first-semester course MM114G. Diagnostic testing before, during, and after these teaching activities is recommended to correct misconceptions and redirect teacher time to students with poor prior skills in mathematics (Garfield, 1995; Engineering Council, 2000).

Descriptive statistics is also followed up throughout the second year through statistical evaluation of sensory data (SE301G) and consumer surveys (TP100G), in the third year when teaching research methodology (ME103G), and in the bachelor thesis of many students. Basic calculation has not been given the same emphasis in the curriculum. It is suggested that basic calculation receive more teacher time and effort in LB201G, EK103G, and TP100G to ensure that the topic is covered throughout the first two years of the program.

4.1 Confidence and gender

Jones and Smart (1995) suggest two solutions to the confidence gender gap that could be applied to university-level education: first, increase the use of technology in teaching mathematics, and second, involve female students in studying and discussing gender differences in confidence in applying quantitative methodology. Thus, it is suggested that the gender confidence gap found in this study be discussed with the students. The most natural time in the curriculum that this can be addressed is in MM114G when introducing calculations.

5. Methodological limitations

5.1 The quantitative value-added technique

As mentioned in the introduction section, the quantitative progression assessment technique has been used extensively because it has the potential to give a fair and relevant measure of the progression of functional knowledge (Kim & Lalancette, 2013). However, it has also been criticized for often not living up to this potential. Warren’s (1984) first objection is that the method risks giving the trivial conclusion that students have mastered the course material better after the course. This is true to some extent for conclusions regarding statistics categories (ii) and (iii). Some of the students have not encountered much statistics in primary and secondary school. The increases in these categories in both years two and three could thus be expected.

Warren’s (1984) second objection, that the method is unable to measure real progression since the test tends to measure intended learning outcomes before the course starts instead of more relevantly testing prior knowledge, is less applicable to this study. Only a small number of questions are so specific that they relate to intended learning outcomes of the courses or specific prerequisites. The use of PISA questions, i.e., those based on material all students have already covered before leaving primary or secondary school but perceived as difficult, allows us to measure whether there is progress in functional knowledge in material that is not new to the student but is rather old knowledge that is fortified and strengthened.

5.2 Selection of respondents

The survey was carried out in connection to campus lectures. There is a risk that this is not a representative sample of all students registered in the program since some students might avoid noncompulsory activities. Thus, the results from the survey describe the progress of students actively participating in the teaching activities.

5.3 Validation and test design

The knowledge questions in the test were chosen based on a teacher perspective of what students are expected to master to complete courses and, ultimately, succeed professionally, in combination with input from prospective employers of the program's graduates. These questions should ideally be validated in a larger study of alumni in order to find out what skills graduates actually use and what skills their employers require of them.

The number of questions in the present study is low compared to the large-scale value-added tests in medical education. Muijtjens et al. (2008) used 200 questions; this study used eight. Increasing the number of questions can reduce the risk of obtaining unrepresentative results from single poorly formulated questions or random errors due to miscalculations that are not directly related to understanding and work-life application of knowledge. However, increasing the number of questions on a test that is noncompulsory and that does not influence the grade can also decrease the motivation of the student taking the test. Poor motivation during the test has been suggested as an explanation for the low PISA results of Swedish 15-year-olds (Örstadius, 2014).

Further validation could be obtained by triangulation with an alternative method. The qualitative case-based methodology of Wahlgren and Ahlberg (2013) is an interesting candidate for such a follow-up study.

5.4 Curriculum development and follow-up

The suggestions for the evolved curriculum are based on lessons learned and comparisons between categories; however, there is no guarantee that the suggestions will have an effect on the students. Follow-up studies need to be performed to assess the effect of the suggested changes.

6. Summary

The aim of this study was to diagnose the progression in quantitative thinking in the Bachelor of Culinary Arts and Food Sciences program at Kristianstad University and to use the insights to evolve the curriculum. A test was designed based on four categories of quantitative skills needed by the students to succeed professionally in the field.

Students are currently progressing in two of the fields (descriptive and analytical statistics), and in a third field (interpreting graphs and tables), competence is already high. Basic computation and mathematics skills are not progressing sufficiently and need to be developed.

By applying insight from how descriptive statistics is taught within the program, it is suggested that basic calculation be introduced as a specific teaching activity early on but clearly linked to practical food application. Suggestions on where in the curriculum basic calculation skills can be fortified throughout the first two years are also given to ensure the topic is kept active in the minds of the students.

Acknowledgments

The author wishes to express his gratitude to the participants in the study and the teaching staff in the program for valuable discussions and suggestions. Program Director Bitte-Müller Hansen is specially acknowledged for helping with the administration of tests for third-year students.

7. References

Arnold, L., & Willoughby, T.L. (1990). The Quarterly Profile Examination. Academic Medicine, 65(8), pp. 515-516.

Buhre, F. (2014). En granskning av en granskning: Universitetskanslersämbetets kunskapssyn. [An Assessment of the Assessment: Approaches to Knowledge in the Swedish Higher Education Authority], Högre utbildning, 4(1), pp. 5-17.

EDUSCI (2012). Nationellt prov Matematik 2b, vt 2012. Umeå: Institutionen för tillämpad utbildningsvetenskap, Umeå Universitet.

Engineering Council (2000). Measuring the Mathematics Problem. London: The Engineering Council. http://www.engc.org.uk/engcdocuments/internet/Website/Measuring%20the%20Mathematic%20Problems.pdf

Garfield, J. (1995). How Students Learn Statistics. International Statistical Review, 63(1), pp. 25-34.

Gibbs, G. (1999). Using Assessment Strategically to Challenge the Way Students Learn. In: Brown, S., & Glasner, A. (Eds.) Assessment Matters in Higher Education: Choosing and Using Diverse Approaches. Buckingham: S.R.H.E. and Open University Press.

Hjort, M., & Sundkvist, M. (2010). Högskoleverkets system för kvalitetsutvärdering 2011-2014. [The Quality Evaluation System of the Swedish National Agency for Higher Education]. Stockholm: Swedish National Agency for Higher Education. http://www.hsv.se/download/18.4afd653a12cabe7775880003715/1022R-system-kvalitetsutv.pdf

HKR (2009). Strategi 2009-2014, Strategiska utmaningar i ett nytt högskolelandskap. [Strategy 2009-2014, Strategic Challenges in a New Landscape for Higher Education] Kristianstad: Kristianstad University. http://www.hkr.se

Jones, L., & Smart, L. (1995). Confidence and Mathematics: A Gender Issue? Gender & Education, 17(2), pp. 157-166.

Kim, H., & Lalancette, D. (2013). Literature Review on the Value-Added Measurement in Higher Education. Berlin: OECD. http://www.oecd.org/edu/skills-beyond-school/Litterature%20Review%20VAM.pdf

Muijtjens, A.M.M., Schuwirth, L.W.T., Cohen-Schotanus, J., & van der Vleuten, C.P.M. (2008). Differences in Knowledge Development Exposed by Multi-Curricular Progress Test Data. Advances in Health Sciences Education, 13(5), pp. 593-605.

Örstadius, K. (2014). Därför kan PISA-testet vara missvisande [Why the PISA test may be misleading]. Dagens Nyheter, June 4.

Parsons, S., Croft, T., & Harrison, M. (2009). Does Students’ Confidence in Their Ability in Mathematics Matter? Teaching Mathematics and Its Applications, 28, pp. 53-68.

Schuwirth, L.W.T., & van der Vleuten, C.P.M. (2012). The Use of Progress Testing. Perspectives on Medical Education, 1(1), pp. 24-30.

The Swedish National Agency for Education (2010). Rustad att möta framtiden? [Well equipped for facing the future?]. Stockholm: The Swedish National Agency for Education, report 2010:352.

The Swedish National Agency for Education (2013). PISA 2012. Stockholm: The Swedish National Agency for Education, report 2013:398.

UHR (2014). Antagningsstatistik [Statistical data on admission]. Stockholm: The Swedish Council for Higher Education. http://statistik.uhr.se/

Wahlgren, M., & Ahlberg, A. (2013). Monitoring and Stimulating Development of Integrated Professional Skills in University Study Programs. European Journal of Higher Education, 3(1), pp. 62-73.

Warren, J. (1984). The Blind Value of Value Added. American Association for Higher Education & Accreditation Bulletin, 37(1), pp. 10-13.

License


Lärarlärdom, 2014 Copyright © 2014 by Biblioteket, Blekinge Tekniska Högskola is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
