
Surveying university student standards in economics

Peter Abelson*
Macquarie University 2005

Abstract

Responding to concerns about declining student quality in economics courses, in late 2003 and early 2004 the Economic Society of Australia surveyed the Heads of Economics Departments in Australia to determine their views on three main issues: student standards, the major factors affecting these standards, and the policy implications. This paper describes the main results of the survey, reviews the conduct and value of this kind of survey, and discusses the policy implications for economics in universities. Strong processes assuring anonymity to respondents minimised strategic responses, but may not have eliminated them entirely. Most respondents considered that student standards have declined and that the main causes include lower entry standards, high student-staff ratios, and a declining culture of study. However, some respondents argued that standards are multi-dimensional and that people may properly attach different weights to different attributes. Importantly, these views are based largely on experience rather than evidence, and a major finding of this paper is the need for more evidence on standards and on the factors that influence them. Most respondents favour a decentralised, university-based approach to dealing with these issues, contending that centralised accreditation is inappropriate and that market forces would promote quality. In the writer’s view, externally set and assessed exams, incorporated into university examination procedures, would lift standards and send improved market signals.

Keywords: Universities, Educational standards.

JEL Code: A20

(*) Macquarie University. The author is also Honorary Secretary of the Economic Society of Australia, and in that capacity prepared the report ‘A Survey of Student Standards in Australian Universities in 2003’, published in 2004 by the Economic Society of Australia (see www.ecosoc.org.au). The views expressed in this paper are the author’s personal views.

Over the last decade, following for example Abelson (1996) and Lewis and Norris (1997), the economics profession has been considerably exercised by declining enrolments in economics majors and by their causes and consequences. Nearly all analyses, including these two papers, have pointed to competition from other commerce-based subjects, most notably business studies, as the major cause of the decline (see also, for example, Bloch and Strombeck, 2002). Two major consequences of the expansion in related commercial subjects have been a large increase in students in economics service courses and a perceived lowering of standards (Alauddin and Tisdell, 2000; Millmow, 2002).

The economics profession has been divided as to how best to deal with these changes, in particular divided about a teaching strategy. The traditional approach, embodied in a major survey of professorial opinion described in Anderson and Bland (1992), was that economics departments should provide a ‘rigorous classical economics education to students’. Opinion has shifted a little since then and there is a more widespread view that this approach needs to be softened with a more utilitarian strategy and more real-world applications (Bloch and Strombeck, 2002; Guest and Duhs, 2002). On the other hand some economists, such as Millmow (2002), argue more radically that greater application of heterodox economics would retain the rigorous approach of traditional economics and be more accessible and realistic while avoiding the typical dumbing down of economics in business study courses.

The central topic of this paper is closely related to these issues: it is the question of student standards in economics. Drawing on the survey by the Economic Society of Australia (2004), the paper is concerned with trends in student standards in economics, their causes, and possible policy responses. But the paper goes beyond the findings of the survey to discuss the strengths and weaknesses of such surveys in estimating student standards. And it goes beyond the standard discussion of teaching strategies to discuss a wider range of policy options for student standards.

In the last 15 years, there has been an extraordinary increase in the total number of students in Australian universities. Between 1990 and 2003, the number nearly doubled from 485,000 to 930,000. Between 2000 and 2003 alone, the number rose by some 30 per cent, from 695,000 to 930,000. Nearly 200,000 of the current students are from overseas. On the other hand, total effective full-time staff increased by a little over 10 per cent between 1995 and 2003 (from 80,754 to 89,370). University-wide, the student-staff ratio rose from 15.3 in 1995 to 21.4 in 2003.1

In these circumstances it would seem inevitable that average standards would fall. Average standards could, however, fall while every student remains as well or better educated than they would have been without the increase in enrolments; in that case universities could still be adding value generally. Nevertheless, there is widespread concern both within and outside universities that the expansion in enrolments has come at the expense of a general decline in the quality of university education. In 2001, after a lengthy inquiry and numerous submissions, a Senate Committee found ‘strong evidence to demonstrate that many subject disciplines in many universities had experienced declining standards in recent years’.2 Many other countries have had similar experiences.

Commerce faculties are often under special pressure because they are attractive to students and they may generate surpluses to support other parts of universities. Within economics departments, as noted above, there is concern that standards may have fallen in order to maintain enrolments or to match apparently easier and less rigorous subjects. Notwithstanding this, some economics departments believe that they have been able to maintain and even to raise student standards despite these pressures.

With this background, in September 2003 the Central Council of the Economic Society of Australia (ESA) resolved to conduct a survey into student standards in economics courses in universities. The survey had three main aims. These were to determine:

● the standards of work achieved by students of economics in Australian universities;

● the main factors that influence these standards; and

● policies for maintaining or improving standards in economics.

The Society sent a questionnaire to Heads of Economics Departments at the 29 Australian universities which run economics courses, most of which also provide economics degrees. Twenty-one heads of departments or their representatives responded.3 The 21 responding departments included 6 of the 8 major metropolitan universities4, 10 responses from other metropolitan universities, and 5 responses from non-metropolitan universities. This may be considered a representative cross-section of Australian universities. In July 2004, the Society published its report on the survey (ESA, 2004).

In sections II and III below, I outline the nature of this survey of student standards and describe the main results. Section IV discusses issues in preparing and conducting this kind of survey including the reliability of the results. Section V discusses the major policy issues relating to student standards identified in the survey. There is a brief concluding section.

In developing the survey, ESA recognised that the university economics market comprises several sub-markets, which may be subject to different influences. First year undergraduate students often include large numbers doing economics courses as part of other degrees. On the other hand, most students in third year economics courses intend to graduate with an economics qualification. Students studying for Honours, Masters and Ph.D qualifications are likewise seeking an economics qualification but may exhibit different standards (relative to what might be thought appropriate) and be subject to different constraints. Accordingly, the Society designed the survey to elicit answers about student standards for each main sub-market. The survey focused on first and third year undergraduate students and masters by coursework students. It also provided an opportunity for respondents to provide information on Honours and Ph.D students.

The survey questionnaire along with the survey report can be found on the ESA website (www.ecosoc.org.au). The main features of the survey are briefly summarized below. In order to provide perspectives on the nature and size of these sub-markets, Section A of the survey sought information on the numbers of students taking economics courses in various undergraduate years and in postgraduate courses, the proportion of international students in each year, and student-staff ratios.

Section B addressed the standards of work achieved by first and third year undergraduates and masters by coursework students. To assess standards, respondents were asked to use the following guidelines.

● Very good – a high distinction or distinction standard of work, 75 or more out of 100

● Good – a credit standard of work, 65-74 out of 100

● Satisfactory – work worth 50-64 out of 100

● Poor – work worth 40-49 out of 100

● Very poor – work generally below 40 out of 100

Respondents were asked to judge the percentage of students in each of these five categories and whether standards had changed over the last 10 years. They were also asked whether their answers were based on general experience or specific evidence.

Section C sought to determine the major factors influencing the standards achieved by the various categories of students. The questionnaire sought responses on eight or nine potential factors in each case (for example, entry standards, linguistic ability, and faculty resources) and gave respondents the opportunity to describe other factors that might affect standards. The questionnaire also asked respondents whether their departments had adopted special strategies to achieve desired standards.

Section D sought views on policies for the maintenance or improvement of standards at the various student levels. The questionnaire offered some ten possible policies for each student group (including various accreditation and review procedures) and gave respondents the opportunity to describe other policies.

Section E asked respondents for comments on the standards of work achieved, determining factors, and possible strategies for Honours and Ph.D students.

Throughout the survey, respondents were invited to provide additional comments either to clarify their quantitative answers or to provide additional qualitative material that was not explicitly asked for in the questions. These additional comments added considerably to the richness of the responses (see Economic Society of Australia, 2004).

A critical feature of the survey was the pledge of confidentiality. The Society told respondents that only the President, Secretary and Administrator of Central Council and three independent university professors, who would review the draft report for accuracy and quality, would view the responses. In addition, the draft report was circulated before publication to all respondents to ensure that none were misquoted and that no individuals or institutions were identified in the report. Respondents were told that the final report would be a public document.

Economics student and staff numbers

Most economics departments have over 1000 first year undergraduate students and nearly all have over 500 students. In third year, there are usually fewer than 500 economics students and the median number is between 100 and 199 students.

International students typically constitute over 30 per cent of all students in all levels of economics studies, except in Honours degrees. The proportion of international students tends to be highest in first year undergraduate studies and in Ph.D studies.

Of the 17 respondents reporting student-staff ratios, all but one reported an EFTS-staff ratio in excess of 20, and nine respondents reported an EFTS-staff ratio over 30. Even allowing for increased employment of casual staff, these student-staff ratios are high historically. They are also very high compared with those in most other faculties in universities; as we have seen, the university-wide average in 2003 was 21.3.

Student standards

Respondents reported a broad distribution of standards, especially among first year students. Seventeen of the 20 respondents to this question reported that 30 per cent or more of their first year students are good or very good (defined in the survey as credit-grade students or higher). On the other hand, eleven departments reported that 30 per cent or more of their first year students are poor or very poor (defined as students likely to fail their courses). A further five departments reported that between 20 and 29 per cent of students were in the poor or very poor categories.

For third year students, the respondents reported a similar proportion of good or very good students but slightly fewer poor or very poor students. Assessments of students undertaking masters by coursework were more mixed, with respondents reporting a variety of experiences.5

Table 1 provides some summary statistics. The numbers in this table are the means of the estimates provided by all respondents, not weighted for numbers of students in the departments.

Table 1 Estimated mean percentages of students in each standard in 2003 (% of students)

Standard        First year   Third year   Masters (coursework)
Very good          12.7         17.7            22.9
Good               22.1         30.4            24.5
Satisfactory       37.9         35.0            33.0
Poor               15.0          9.5            10.9
Very poor          12.2          4.8             8.7
Total             100.0        100.0           100.0

Source: Economic Society of Australia (2004).

Overall, the responses suggest that standards in undergraduate courses have fallen over the last ten years (see Table 2). Respondents for thirteen departments considered that standards in first year courses have fallen, compared with only three who considered that they have risen. Eight respondents considered that standards in third year courses have fallen, whereas only four judged that they have risen.

Table 2 Changes in student standards over the last 10 years (1994-2003)

Sub-market    Risen           Risen      Stayed     Fallen     Fallen           Total
              significantly   a little   constant   a little   significantly    responses
First year         1              2          3         10           3              20
Third year         2              2          6          8           0              19
Masters            0              4          2          1           1               8

Source: Economic Society of Australia (2004).

On the other hand, of the eight respondents commenting on masters’ coursework courses, four judged that standards have risen compared with two who judged that they have fallen. It should be noted also that most respondents considered that the standards of Honours and Ph.D students have been maintained. Departments are keen to protect their reputations in these programs, and both staff and students are well motivated.

Importantly, respondents were asked to state whether their assessments were based on experience or evidence. As shown in Table 3, most judgments were based on experience.6 Where evidence was cited, it related mostly to an assessment of results over time. However, this raises questions about the consistency of grades over time. In many cases grades are determined endogenously, with predetermined proportions of the students being awarded various grades. Few respondents cited other evidence. The issue of evidence is quite crucial to the debate on standards. There is surely a very strong case for more specific research into student standards, a point to which I return with some examples below.

Table 3 Judgments based on experience or evidence

Sub-market    Experience   Evidence   Experience/evidence   Total
First year        13           2               4              19
Third year        14           2               3              19
Masters            6           1               1               8

Source: Economic Society of Australia (2004).

Factors determining student standards

Table 4 shows the numbers of respondents citing factors affecting the standards of economics students. Most concerns were expressed about first year students. In this category, out of 20 respondents, 15 considered that high student-staff ratios were important or very important, 14 rated the poor English of international students as important or very important, and 13 considered that competition with other subjects had an important or very important impact on (lowering) standards. Qualitative responses indicated particular concerns about the way that business studies have resulted in lower standards.

Other factors assessed as contributing to lower standards in first year (where applicable) are the low entry standards of international and local students and low student work hours. Many survey responses highlighted declining levels of student application as a major concern and an important determinant of standards.

Similar factors are rated important for third year undergraduate students. Here, poor English standards of international students, low student work hours, high student-staff ratios and competition with other subjects are commonly cited as important or very important factors in standards.

Views on standards of masters’ coursework students were more mixed and there were fewer responses. The responses indicate some concern about entry standards and English standards. But given the small number of responses, generalisations are not appropriate.

Policy options and practices

A theme of the responses is that each institution needs to do the things that best reflect the backgrounds and objectives of their particular students. Some respondents expressed the view that market signals would promote appropriate standards. Although there was some support for external reviews of programs, there was little support for external accreditation or exams.

Table 4 Numbers of respondents citing factors affecting standards of economics students

                                          Very        Important   Minor        Not         Not          Total
                                          important               importance   important   applicable
First year students
Low entry standards local students           5            4           3            4           3           19
Low entry standards int. students            6            3           7            1           3           20
Poor English of international students       8            6           5            0           1           20
University standards on failure rates        1            2           6            9           1           19
University cross subsidies                   2            3           4            6           5           20
Low student work hours                       5            3           3            5           3           19
High student-staff ratios                    5           10           2            2           1           20
Competition with other subjects              8            5           4            2           1           20

Third year students
Low entry standards local students           0            4           4            3           4           15
Low entry standards int. students            2            3           6            1           3           15
Low standards private transfers              2            1           6            3           3           15
Poor English of international students       4            4           5            2           2           17
University standards on failure rates        1            1           6            5           2           15
University cross subsidies                   1            1           1            6           5           14
Low student work hours                       3            3           2            2           5           15
High student-staff ratios                    2            5           3            1           4           15
Competition with other subjects              2            4           5            1           3           15

Masters coursework students
Low entry standards local students           2            0           2            2           0            6
Low entry standards int. students            3            0           2            1           0            6
Poor English of international students       2            2           2            0           0            6
University standards on failure rates        0            0           3            1           1            5
University cross subsidies                   0            0           2            2           2            6
Low student work hours                       1            0           2            1           2            6
High student-staff ratios                    0            4           2            0           0            6
Competition with other subjects              0            2           1            1           1            5
Joint with other Masters degrees             1            0           0            2           3            6

Source: Economic Society of Australia (2004).

Table 5 shows the numbers of respondents citing policies for maintaining or raising student standards. As would be expected, the preferred policies reflect respondents’ judgments on the determinants of standards. For first year students, out of 20 respondents, 15 considered that lower student-staff ratios are important or very important, 14 cite higher English standards as important or very important, and 11 cite higher entry standards for international students as important or very important. Higher entry standards for local students were also considered important.

Policy preferences for third year students are similar. Lower student-staff ratios, higher entry standards (especially for international students) and higher English language requirements were cited as the most important policies. There was some support for external reviews of courses, but little support for, and some strong opposition to, accreditation of degrees, and little support for the idea of a common external exam.

Entry standards and English language requirements are again an issue for Masters students, although the sample of respondents is small. Again, there was little support for external reviews of any kind.

Table 5 Numbers of respondents citing policies for maintaining and raising student standards

                                          Very        Important   Minor        Not         Not          Total
                                          important               importance   important   applicable
First year students
Raise entry standards local students         3            4           8            3           2           20
Raise entry standards int. students          7            4           4            2           3           20
Raise English language requirements          9            5           4            1           1           20
Higher failure rates                         3            2           6            6           2           19
Reduced cross subsidies                      3            5           4            4           4           20
Lower student-staff ratios                   5           10           4            1           0           20
External accreditation of courses            5            2           6            6           1           20
External reviews of courses/standards        1            7           8            4           0           20
External exam for 1st year students          2            2           5           10           1           20
Award certificates of attendance             0            3           2            9           5           19

Third year students
Raise entry standards local students         1            5           5            3           2           16
Raise entry standards int. students          4            4           4            1           3           16
Raise English language requirements          4            4           6            1           2           17
Higher failure rates                         2            3           5            5           2           17
Reduced cross subsidies                      1            4           1            5           4           15
Lower student-staff ratios                   4            6           4            1           1           16
External accreditation of courses            3            2           4            6           1           16
External accreditation of degree             3            0           4            6           2           15
External reviews of courses/standards        2            5           5            3           1           16
External exam for 3rd year students          2            1           2            8           2           15
Award certificates of attendance             1            1           3            7           2           14

Masters by coursework students
Raise entry standards local students         2            1           3            2           0            8
Raise entry standards int. students          4            1           3            1           0            9
Raise English language requirements          2            2           3            1           0            8
Higher failure rates                         1            1           2            3           0            7
Reduced cross subsidies                      0            0           1            4           1            6
Lower student-staff ratios                   0            3           3            1           1            8
External accreditation of courses            1            1           1            4           0            7
External accreditation of degree             1            1           1            4           0            7
External reviews of courses/standards        1            2           3            1           0            7
External exam for Masters students           2            0           0            5           0            7
Award diplomas to weak students              0            3           0            4           0            7

Source: Economic Society of Australia (2004).

IV Issues in a Survey of Student Standards

Needless to say, many issues arise in a survey of student standards. Some are fundamental: What is quality? What is evidence of quality? What incentives do heads of departments face when responding to such surveys? How can we tell whether the responses are honest and accurate? Other issues are more pragmatic, concerning the structure, conduct and analysis of such a survey. In this section, I first discuss some pragmatic issues before turning to the more fundamental ones.

Some pragmatic issues

It may be noted that, in collating the responses, we counted each response as one regardless of the size of the institution. This provides a clearer picture than attempting to weight the responses by, say, the number of economics students in an institution. In any case, the exact numbers in each institution are not known. But this process may over-weight the views of respondents from institutions with fewer students.
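The difference between the two aggregation methods can be sketched as follows; the departmental figures below are purely hypothetical and for illustration only, since the survey did not collect exact enrolment numbers.

```python
# Each tuple: (a department's estimate of % "good or very good" students,
#              an assumed number of economics students in that department).
# All figures are hypothetical, for illustration only.
responses = [(45.0, 1200), (30.0, 400), (25.0, 150)]

# Unweighted mean: each department counts once, as in the ESA report.
unweighted = sum(pct for pct, _ in responses) / len(responses)

# Weighted mean: departments weighted by student numbers, which the
# survey could not do because exact enrolments were unknown.
total_students = sum(n for _, n in responses)
weighted = sum(pct * n for pct, n in responses) / total_students

print(round(unweighted, 1))  # 33.3
print(round(weighted, 1))    # 39.9
```

With these (invented) numbers, the unweighted mean gives the two smaller departments equal voice with the large one and so understates the experience of the typical student; this is the over-weighting of small institutions that the paragraph above notes.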

One reason that the numbers are not clear relates to the difficulty of defining economics students or students in economics courses. Universities describe such students differently, and we could not offer a single definition of economics students that all respondents would understand unambiguously. Fortunately this did not matter greatly, because we were interested only in the broad magnitudes needed to interpret the responses.

A similar definitional problem arises with student-staff ratios. With increasing numbers of part-time students and staff, student-staff ratios are less precise concepts than they were and comparisons over time and across institutions have to be made cautiously. However, the reported student-staff ratios were generally so high that the implications for potential under-resourcing were clear, despite some imprecision in the measure. Other measures of resources per student may be desirable, but are even harder to obtain.

Another basic issue is the heterogeneity of the sub-markets within commerce faculties. As noted above, the Society attempted to deal with this by identifying five separate student categories (first and third year undergraduates, honours, coursework masters and Ph.D students). This lengthened the questionnaire considerably and three of the eight non-respondents claimed that they did not respond because of the excessive length of the questionnaire. Evidently there is a trade-off between the length and detail of the survey and the response rate.

Inevitably the choice of questions is selective. A reviewer suggested that, in asking about the determinants of standards, the survey should have included such possible factors as teaching standards, pastoral care of students, student achievement motivation, and learning style. Arguably, student achievement motivation was included implicitly in student work hours. More generally, as noted, respondents were given many opportunities to express their views on potentially omitted factors.

Finally, a basic issue in any questionnaire is the clarity of the questions. As noted above, we attempted to deal with the concept of standards by defining broad categories (very good, good, and so on) and then eliciting an estimated distribution of current standards for each student sub-market. In a draft questionnaire we designed a similar question about the distribution of standards ten years previously. However, on piloting, this was found to assume too much historical knowledge, and we substituted a question asking respondents simply whether, in their view, standards overall in each student cohort had risen significantly, risen a little, and so on (see Table 2). Most respondents answered this question without reservations. However, within any given student group the standards of some students may rise while those of others fall, and one respondent noted this.

Some fundamental issues

As I noted in the Introduction, this survey was concerned with standards and changes in standards and related issues. Arguably, the more important issue is value adding. Universities may add value even while average standards fall. This is an important but separate issue. However, many similar points would arise – notably the need to test students to determine what is being achieved.

Turning to student or course quality, these concepts are multi-dimensional and are not necessarily clear or agreed. In the words of one respondent quoted in ESA (2004): ‘Standards of what? Research and writing skills have fallen, quantitative and memorisation skills have risen. Standards are not absolute, but directed towards ever-changing ends’. Another respondent noted that students are getting less economics but more practical business studies orientation in their courses and that this could be seen as an improvement. Two other respondents argued that their department’s institutional approach to economics was more useful to students than a more conventional and perhaps abstract neo-classical approach. Such arguments strongly underscored views that decentralised solutions to student standards are both appropriate and desirable.7

Whatever definition of quality is adopted, there remains the question of what counts as evidence of quality. How do universities know what standards are achieved? As Anderson et al. noted in their submission to the Senate Committee (2001, p. 176), when asked this question, deans and vice-chancellors ‘invariably had to admit that they had no direct way of knowing about changes in standards from one year to the next; or how their university stacks up against others’. As noted above, most respondents to the Society’s survey based their views on standards on their considerable experience. Few respondents cited firm evidence on standards.

In an attempt to achieve objective measures of student standards, the survey sought data on texts used in core first and second year courses. It was thought that this might be a guide to the levels of economics taught over time in each year. However, several respondents were unable to provide information about texts in use some 15 years ago. Thus it was not possible to draw conclusions on student standards from the survey responses. More fundamentally, as one respondent observed, the ‘real issue is what sorts of questions do we ask and what sorts of answers are we “satisfied” with?’ This suggests that while data on texts could be useful, conclusions on standards would require a more in-depth examination of course materials.

I might here cite two surveys that I have conducted to try to understand high failure rates and low standards in some courses that I run. The first was a survey of student work hours in a core second year course with 300 students. This survey found that the median workload for a standard university course in 2001 was only 5 hours per week, compared with the traditionally expected 12 hours per week. My faculty replicated this survey across all main courses and arrived at a similar finding. Second, in 2002 I conducted two vocabulary tests, set for me by the Linguistics Department (Macquarie University), as adjuncts to multiple choice economics tests. We found that 37 per cent of the roughly 300 students in the course, and over half of all international students, were likely to fail the course on account of poor vocabulary alone. More such tests could provide important material on student standards.

In the absence of such tests, is it possible to rely on the responses of heads of departments with regard to standards? As the Senate Committee (2001, p. 157) points out, ‘many academics are unwilling to speak out for fear of bringing their institutions into disrepute’. Indeed, more than this, many academics (like their universities) have a personal financial stake in greater student numbers regardless of quality. Heads of departments, in particular, are appointed inter alia to promote their department’s reputation and financial interests. They may be reluctant to note potential negatives in performance, most obviously if there is any possibility of identification. In any case, they may consider that there is more to lose than to gain by providing honest answers. Accordingly, some heads of departments facing difficulties with student quality or numbers might be less inclined to respond. Others who do respond might do so strategically.

In these circumstances, confidentiality (and confidence in confidentiality) is essential. One respondent, who privately expressed explicit concern to me about the possible views of his Vice-Chancellor, made his response conditional on the report not presenting an analysis of any differences between types of university (for example the Group of Eight, other metropolitan universities, and non-metropolitan universities). As described above, the process from collation of the responses to publication of the report contained several checks designed to assure confidentiality and anonymity. On the whole this process appeared to minimise strategic responses, though perhaps without eliminating them entirely.

Finally, how can the survey agency determine whether the responses are accurate and honest? The obvious way is to ask for evidence but we have seen that this does not often exist. And where it is not readily available, it would be an excessive and counter-productive imposition on respondents to try to extract it.

Other objective tests of freedom from bias are hard to achieve. One test of accuracy of response, though not of bias, is internal consistency of responses. As shown, in the Society survey the responses on the causes of standards and on policies were consistent. Another possible test for bias would be to look for responses that appear inconsistent with external data, for example universities with low entry standards reporting high achieved standards. However, this requires the survey agency to undertake an active vetting role that takes into account possible explanations for the responses. Apart from the inherent difficulties of such analysis, such a vetting role is not consistent with a professional society’s relationship with its members. In essence the survey must be designed to elicit the necessary information and to report it. The survey agency cannot discount individual responses on the grounds that they appear to be inconsistent with externally generated information.

It is axiomatic for most economists that policies are required only when there are problems that markets cannot fix and when, following Adam Smith, the cure is better than the disease. In the words of one survey respondent:8

‘If we can find a problem, let’s fix it. But we must identify the problem first. Romantic academics musing that standards were higher when we educated fewer people is not a problem. Are we satisfying the people who pay the bills: students, employers, parents, taxpayers? If not, then how not, then why not, and only then how do we fix it?’

In other words, is there really a problem with student standards? And is there market failure?

Some survey respondents considered that there is no serious problem as standards have improved. A weaker view would be that although average standards may have fallen, no one is disadvantaged because the stronger students achieve the same (or better) standards that they did previously. However, these do not appear to be majority views. Most respondents considered that standards have an objective basis, that standards have fallen in many universities and for many students, and that this is a matter for concern. I share this latter position. In my experience the standards required of students have fallen, and the resources are not available to provide separate classes for better students within one department. A possible resolution would be implicit streaming of students across universities, with each university serving a more homogeneous population. This would be a de facto market solution, but it is hard to tell whether it is occurring.

An important related issue is whether the market is efficient and can be relied upon to supply and determine efficient standards. As one respondent remarked:

‘Let the market rule. Avoid credentialism and the temptation to centralise. Let everyone choose their own brain surgeon regardless of qualifications.’

In my view this places too much faith in the effectiveness of market mechanisms and signals in the regulated university education sector. Another respondent commenting on the variety of standards in masters programs noted that:

‘The market is currently pretty poorly informed about these differences (in masters programs) as often are the students themselves.’

The critical questions are whether markets can recognise the differential qualities of degrees and whether this in turn affects the behaviour of university administrations, staff and students. The prices for courses in Australian universities are similar and send limited signals to students. While local employers may have a fair idea of the value of many degrees, overseas employers may not. In any case, it appears that many overseas students prize an Australian degree as a potential migration ticket for which the standard of the degree is largely immaterial. Moreover, many academics, including the writer, believe that university administrations are largely motivated by revenue maximisation rather than by quality objectives.9 Given price controls on degrees, revenue is maximised by maximising turnover. As Gare observed in his submission to the Senate Committee (2001, p.165):

‘when a university’s goal is defined as satisfying customers or clients to generate the maximum throughput and to maximise profits, it clearly pays to pass as many students as possible and to focus on those students who want to get their degrees with the minimum amount of work’.

If market signals are not working, who is losing? First, because of the public good (positive externality) nature of education, society at large loses from inferior standards of education. Some people even argue that education in schools suffers from the decline in standards in universities. Second, the losers include students who take part in poorer classes than they otherwise would; the quality of discussion in class inevitably falls. These information failures, combined with the externality characteristics of education, mean that, in the current institutional framework, decentralised revenue-maximising institutions and market forces are unlikely to produce appropriate student standards.

Turning to the policy issues discussed in the ESA survey, four main causes of low standards and related policy issues are taken up here.10

1. Low entry standards, including poor English – raise entry standards.

2. Lack of resources to deal with these issues – increase resources.

3. Low student inputs – require more student work.

4. Low passing standards – raise grade standards.

Raising entry standards, including higher English skills

Most survey respondents considered that higher entry standards would be desirable. Two-thirds of respondents said that improved English language skills for international students are important or very important. Also, two respondents argued that the decline in school standards is a major problem for economics at universities, with one proposing that the profession should ‘take the school system head on at a national level’. I concur with these views on entry standards, language, and schools, and believe that the profession should take a more active role in these issues.

However, respondents also recognised that raising entry standards would often run counter to university policies and that academics have little control over general entry standards. One respondent noted that he has argued for a ‘university-run language test, but this has been regarded as undermining the university’s competitive position’. More pertinently, raising entry standards and thus (possibly) reducing student numbers would reduce departmental revenue, salaries and jobs, especially if prices are insensitive to service quality. Some departments considered that this could affect the viability of the department itself. These are evidently major constraints on the lead that the profession’s Society can take.

Increasing resources and technical improvements

Three-quarters of survey respondents considered that reducing student-staff ratios is important or very important to improving standards. Staff resources have lagged well behind the increase in student numbers. In addition, many weaker students need more assistance than do more capable students, a point that university administrations are often reluctant to admit. Further, greater variance in student standards means that the stronger students also bear the cost of lower standards unless they can be provided with some differentiated services.

As the survey showed, most economics departments are attempting to maintain standards in various ways, both soft and technological. Methods include putting more high-quality effort into teaching, more written assessment by way of formal essays, student mentoring programs, and web-based courses. However, most conclude that such technical improvements cannot fully substitute for the decline in resources per student.

Raising student inputs

In the last 15 or so years, there has been a major decline in the level of student participation in university work. Recent surveys at Macquarie University have shown that the median full-time student works only about 20 hours a week at university and does 20 hours a week in outside part-time work. Thus, half of all full-time students are studying fewer than 20 hours a week. Two years ago, Macquarie University lowered the study benchmark from 12 to 9 hours per week per course. When I inquired why, I was told that the benchmark had been lowered to more closely reflect student behaviour as revealed in our surveys! Other surveys, including the ESA survey, indicate similar student behaviour at other universities. Inevitably this has had a major impact on student standards.

There is, I believe, an appropriate response to this. Universities could foster a work culture for university students by making student obligations clear to students before they start their university education and continuously thereafter, preferably in the form of a quasi-contract. Currently, there is a signalling failure. University marketing often encourages students to enter the university with little idea of the work involved. University administrations provide few explicit upfront work requirements to students. Students are permitted to enrol as full-time students when they are really part-time students. It is not hard to see why: a policy that set explicit work standards for students would run counter to a university’s revenue-maximising objective, which requires a permissive attitude to student work habits.

Raising passing standards

Course grades provide another signalling opportunity. Indeed, it could be argued that if grade standards are appropriate and known, there is no need to attempt to influence student inputs. But this appears unrealistic. If a university accepts low entry standard students and short working weeks, it cannot set grades inconsistent with this. This may be one reason why respondents to the survey did not place a very high priority on raising failure rates (see Table 5 above). While about 30 per cent of respondents considered that raising failure rates is important or very important, others did not assess this so highly.

Unfortunately, grading is another area where the incentive structures are often perverse. As the Senate Committee report notes (2001, p.157): ‘the financial incentives for some academics and departments to give passing grades to students whose fees pay for some proportion of the academic salaries are clearly very strong’. In some cases, individual salary supplementation (and individual promotion) is related to student assessments. It is hard to believe that awards of grades in these conditions are not influenced by salary incentives.

External accreditation, reviews, and exams

The ESA survey canvassed three forms of external review: formal accreditation of courses or degrees, external reviews of courses or degrees, and external exams. As can be seen from Table 5, there is some support for external reviews but little support for external accreditation or exams. This reflects the status quo. There is an important difference between these options that may explain preferences for reviews. Departments have more control over reviews than they would have over accreditation or external exams. Reviews are typically based on terms of reference set by the host university and are constrained to review courses subject to the objectives of that university. In some cases, a department may initiate a review as a defence against university administrations.

The lack of support for any form of accreditation is consistent with the long-standing policy of the Society that it should not be involved in accreditation exercises, for a variety of reasons: that accreditation is anti-competitive; that it either sets standards too high and excludes people, or sets them too low and is meaningless; that it may define economics too narrowly; or simply that it is too hard to achieve. Some respondents to the Society survey argued very strongly that accreditation would not recognise the diversity of student needs and academic approaches to teaching economics and that any form of central control would be a major error.11 To my mind, the practical problems of determining which courses or degrees to accredit, and which not to accredit, would be particularly severe.12

On the other hand, in my view there is a case for external examinations. I doubt that market signals from university degrees are adequate. Admittedly, there is a lack of evidence, one way or the other, about the quality of the market signals, and some research on this topic could be helpful. If there are signalling failures, it is questionable whether a system in which university exams are self-set and self-assessed has the appropriate incentives to produce high standards.

Externally set and assessed micro and macro exams would provide effective tests and signals of standards as part of the process of completing a first-year course and, more especially, of completing an undergraduate or Masters’ coursework degree in economics. Externally set and graded exams for secondary school leavers in most Australian states and in many other countries are generally held in high regard. Such exams provide good signals and also promote competition, which in itself should produce higher standards. There may also be a case for employing external exams for entry into real masters’ degrees in economics or a PhD research program.

The ESA survey brought out several contrary views. Opponents of external exams argue that there is a need for differentiation of product and for plurality of process. The core economic approach embodied by the neo-classical tradition, which would be the assumed subject of the tests, is said to be losing its relevance for many students. There is said to be a need to understand business practices rather than economic principles. Hard external tests would allegedly discourage entry to economics; simple ones would be inappropriate for better students.

In my view these arguments against externally set and assessed exams are overstated and do not outweigh the potential benefits. However, given the opposition to such exams within Australian universities, introducing them here may require support from outside the universities.

Conclusions

This paper has reviewed the conduct of a survey of senior academics on student standards, described the main results of the survey, and discussed several policy implications. The survey achieved a 75 per cent response rate representing a broad range of universities. I argue that the survey largely, but not wholly, overcame the problem of strategic responses.

The survey (see ESA, 2004) contained a large number of findings as well as a rich source of qualitative observations. On balance, the standards of undergraduates appeared to have declined. In many universities, over 30 per cent of students are deemed likely to fail their courses. There was insufficient evidence to draw conclusions about graduate work. In general, and this point cannot be too strongly stressed, more evidence on what is happening is needed. In particular, it is possible that Australian universities are adding value generally to students even while average standards fall. However, there appears to be little evidence that Australian universities assess standards with much rigour.

There are numerous causes of the decline in standards. Prime among them are high student-staff ratios, poor English standards, competition with other subjects, and a declining student culture of university work. It may be observed that these findings could have been expected. However, these and related issues are not well documented and there is little action on many of these issues. Keeping or putting the issues on the policy agenda seems to be a useful exercise. Again more documentation of the relevant facts, for example student-staff ratios and literacy standards, would be useful.

Turning to policy, a host of issues arises. By and large the response of the profession is to work within the system to try to increase the availability of resources and the efficiency with which they are applied. In my view, the profession could be engaged more actively in some of the bigger issues, such as the decline in standards in the schools and the decline in working culture in the universities, in the latter case by pressing for university-student contracts.

Some survey respondents argue that standards should suit local needs and conditions and that they have not fallen when taking a broader view, for example that institutional economics or business studies are more important than traditional neo-classical economics. Several respondents argue that market forces will promote appropriate standards and that decentralised responses are therefore the best strategy. However, this does not seem to account for a system that appears characterised by signalling failures and perverse incentives, largely but not wholly related to revenue maximisation for both universities and economics departments. While I agree with the profession’s ongoing opposition to accreditation, I believe that there is a stronger case for adopting some external exams than most of my colleagues accept.

References

Abelson, P. (1996) “Declining enrolments in economics: Australian experience”, Royal Economic Society Newsletter, 95, pp.19-20.

Alauddin, M., and C. Tisdell (2000) “Changing academic environment and teaching of economics at the university level: some critical issues analysed with the help of microeconomics”, Economic Papers, 19, pp.1-17.

Anderson, M. and R. Blandy (1992) “What Australian economics professors think”, Australian Economic Review, 4th quarter, pp.17-40.

Australian Senate Committee (2001) Universities in Crisis, Senate Employment, Workplace Relations, Small Business, and Education Committee, Australian Parliament, Canberra, www.aph.gov.au/senate/eet_ctte/public_uni/report/contents.htm

Bloch, H. and T. Stromback (2002) “The economics of strategy and the strategy of economics”, Economic Papers, 21, pp.1-10.

Economic Society of Australia (2004) A Survey of Student Standards in Economics in Australian Universities in 2003, Economic Society of Australia, Sydney, www.ecosoc.org.au.

Guest, R., and A. Duhs (2002), “Economics teaching in Australian universities: rewards and outcomes”, Economic Record, 78, pp.147-60.

Lewis, P. and K. Norris (1997) “Recent changes in economics enrolments”, Economic Papers, 19, pp.43-52.

Millmow, A. (2002) “The disintegration of economics”, Economic Papers, 21, pp.61-9.
