INSIGHTS FROM FIRST-SEMESTER COMPUTER SCIENCE END-OF-COURSE EXAMS

Jody Paul
Department of Mathematical and Computer Sciences
Metropolitan State University of Denver
Denver, CO 80217
303 556-8435
[email protected]

ABSTRACT
This paper reports insights gained from the scoring of 83,000 end-of-course examinations of introductory computer science students. The information was gleaned specifically from the experience of the Chief Reader for Advanced Placement® Computer Science, who was responsible for the 2009 through 2012 examinations. Common themes in the associated student performance include difficulties with: array, list, and string processing; parameters, local variables, and return values; iteration control structures; the use of null; object-oriented programming; and addressing problem specifications. The paper concludes with potential hypotheses for future investigations.

INTRODUCTION
Introductory programming courses are fraught with difficulties for students and challenges for instructors.[3, 7, 14, 16, 18, 26] The growing demand for computer science graduates has focused additional attention on the contribution of introductory programming courses to declining enrollment and high attrition in computer science degree programs.[15, 19, 26] Efforts to understand the nature of novice experiences in learning to program have led to numerous approaches to facilitating the acquisition of programming fundamentals.[1, 4, 9, 10, 15, 16, 20, 25, 27, 28, 29, 30] A particular area of study involves identifying errors made by such students and potential mitigation strategies.[2, 5, 6, 8, 13, 17] This paper adds observational insights based on a large number of standardized end-of-course exams and poses questions that suggest hypotheses for future investigations.

The course “Computer Science A” [31, 32, 37] exists within the Advanced Placement Program® of the College Board®, which “enables students to pursue college-level studies — with the opportunity to earn college credit, advanced placement or both — while still in high school.”[36] Computer Science A is “equivalent to a first-semester college-level course in computer science,” typically labeled Computer Science 1 (CS1), and is explicitly designed such that the “course curriculum is compatible with many CS1 courses in colleges and universities.”[33] The Computer Science A course is typically taught over a full year in the high school setting, whereas the CS1 course usually spans a single semester. The associated end-of-course exam [35] is standardized and rigorously administered. It is three hours long and made up of two parts: a multiple-choice section (1.25 hours) and a constructed-response section (1.75 hours).[34] (Constructed-response items are known as “free-response questions” in this context and the terms are used interchangeably.) There are 40 multiple-choice items and 4 constructed-response items per exam, allowing about 2 minutes per multiple-choice item and 26 minutes per constructed-response item. The Chief Reader for Computer Science contributes to the development of the assessment items that appear on the exam and is responsible for the development and application of scoring rubrics for the constructed-response items. The scores of the two parts are combined and an equating process is applied that maps the composite score into a reported overall score in the range 1 to 5.

The specific intent of this end-of-course exam is to make recommendations as to credit and placement of the test-taker via a comparison assessment. The objective is to determine the knowledge and ability equivalence of a test-taker with a student from the target college population. For example, an overall exam score of 5 equates to college students who achieve a grade of “A” in the equivalent course.

The remainder of this article describes the process of creating and scoring the examinations and identifies common themes in the performance of students over the 2009-2012 exam administrations. Java was the programming language used in each of the 2009-2012 exams.

THE EXAMINATIONS
The process of constructing, validating, administering, and scoring Advanced Placement Computer Science examinations is well established and highly labor intensive. The exam development committee comprises computer science educators from secondary and higher education, content and assessment specialists from Educational Testing Service and the College Board, and the Chief Reader as ex officio member. Development of the examination involves the creation of new assessment items — both multiple choice and constructed response — and their formulation into the standardized exam structure. Multiple-choice questions are pre-tested at universities and colleges before they appear on an actual exam. Constructed-response questions do not go through this additional validation step due to concerns of potential disclosure. Once the exam has been constructed and passed through a rigorous vetting process, it is administered worldwide. All constructed-response items from each of the exams are available online.[38, 39, 40, 41] The length of a single item (multiple printed pages) precludes inclusion of an example here. The multiple-choice questions are computer-scored. The constructed-response items require human scoring based on well-defined rubrics. This scoring is performed by computer science faculty from high schools and higher education institutions following extensive training and subject to continuous assessments of consistency and accuracy.

RUBRICS
Associated with each constructed-response item is an assessment rubric crafted by the Chief Reader and validated with respect to the intents of the item and exam.
Key rubric evaluation metrics include: the applicability of the rubrics to the full range of responses; the correlation of rubric scores with the intent of the item; and the facility and consistency with which the rubrics can be applied. The rubrics are re-validated once actual student responses are available and modified as necessary prior to the actual scoring. By historical convention, free-response questions are scored on a scale from 0 to 9. The full set of rubrics and additional scoring guidelines are available online.[42, 43, 44, 45]

STATISTICS
The total number of Computer Science A examinations considered in this article is 83,021, covering the exams administered from 2009 through 2012. The distribution of exams over those four years is shown below:

2012: 26,103
2011: 22,176
2010: 20,120
2009: 16,622*

* An additional 5,105 exams for the Computer Science AB course were also scored in 2009. Computer Science AB, now defunct, was equivalent to the two first-year college-level courses in computer science, CS1 and CS2/Data Structures.

Recall that the intent of the exams is to make recommendations as to credit and placement via comparison assessment. As such, exam scoring is intended to facilitate determining equivalence classes across student populations. One set of equivalences is between Computer Science A test-takers and students who have just completed a CS1 course at a college or university. For example, an overall score of 4 on the Computer Science A exam should equate to a course grade of “B” in the college course. Another set of equivalences is across test administrations; that is, an overall score should equate to the same score regardless of the year. For example, an overall score of 4 in 2010 should equate to the same overall score of 4 in 2012.

Given this objective, the most effective scoring rubrics for constructed-response items are those that provide the greatest discrimination, such as afforded by a uniform distribution. Since each response is scored on a scale from 0 to 9, a desirable mean score for an item is close to 4.5 and scores should be well distributed across the range. The means and standard deviations for each of the 2009-2012 items are shown in Figure 1 and demonstrate that the combination of items and scoring rubrics is performing much as desired.[46, 47, 48, 49]

Year   Question   Mean   Standard Deviation
2012   Q1         4.12   3.16
2012   Q2         4.26   3.07
2012   Q3         4.25   3.06
2012   Q4         5.17   3.23
2011   Q1         5.29   3.19
2011   Q2         3.35   2.74
2011   Q3         3.99   3.15
2011   Q4         3.53   3.22
2010   Q1         5.46   3.46
2010   Q2         6.34   3.00
2010   Q3         5.84   3.25
2010   Q4         3.64   3.30
2009   Q1         4.70   3.13
2009   Q2         4.64   2.89
2009   Q3         4.75   3.40
2009   Q4         4.00   3.27

Figure 1. Statistics for Computer Science A Free-Response Questions 2009-2012

COMMON ERRORS
The Chief Reader reflects on the student performance outcomes each year and generates a “Student Performance Q&A” that addresses, for each free-response question: the intent of the question; how well students performed on the question; common student errors and omissions; and messages to teachers that might help them improve the performance of their students on future exams. These reports are published online [21, 22, 23, 24] and the information is also disseminated in conference presentations and professional development workshops.

The most commonly occurring errors in student responses to the 16 free-response questions from the 2009-2012 exams are identified in Figures 2 through 6. They are organized by content area and span the gamut from identifying requirements in a problem statement to implementation-language constructs. This presentation of the information demonstrates the breadth of the issues involved.

These issues were identified as the most pervasive and persistent across the multiple exam administrations. Although this article is based on detailed knowledge from direct experience with the 2009-2012 exams, it is worth noting that the Student Performance Q&A reports for 2013 and 2014 [11, 12] indicate persistence of this set of common errors.

Arrays
• Confusion of array indices with array elements
• Multi-dimensional arrays treated as though one-dimensional
• Assuming 2D arrays have both dimensions the same (square)
• Incorrect array declaration/construction/access
• Confusion with lists (syntactic and semantic)
• Confusion with strings (operations)

Lists
• Unintended multiple insertions
• Incorrect attempt to remove element from list
• Incorrect list declaration/construction/access
• Concurrent modification exceptions (e.g., when using enhanced-for)
• Confusion with arrays (syntactic and semantic)

Strings
• Attempting to use “equals” to determine relative order
• Improper use of results of “compareTo” method
• Confusion with array operations

Figure 2. Common Errors in 2009–2012 Computer Science A Exam Responses — Specific Data Structures
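For illustration, the following Java sketch contrasts a few of the error patterns listed in Figure 2 with corrected forms. It is a minimal, hypothetical example written for this paper; the class and identifier names are invented and are not drawn from any exam item.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class DataStructureErrors {
    public static void main(String[] args) {
        // Confusing an index with the element it refers to: a correct search
        // keeps the index and uses values[largestIndex] for the element.
        int[] values = {7, 3, 9};
        int largestIndex = 0;
        for (int i = 1; i < values.length; i++) {
            if (values[i] > values[largestIndex]) {
                largestIndex = i;
            }
        }
        System.out.println("largest = " + values[largestIndex]);

        // Removing from a list inside an enhanced-for loop throws
        // ConcurrentModificationException; an explicit Iterator removes safely.
        List<String> words = new ArrayList<>();
        words.add("alpha");
        words.add("beta");
        Iterator<String> it = words.iterator();
        while (it.hasNext()) {
            if (it.next().startsWith("b")) {
                it.remove();
            }
        }

        // equals (or ==) cannot determine the relative order of strings;
        // compareTo returns a negative, zero, or positive int.
        String a = "apple", b = "banana";
        if (a.compareTo(b) < 0) {
            System.out.println(a + " precedes " + b);
        }
    }
}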

Boundary Conditions
• Failure to address list with no elements
• Failure to address empty string
• Failure to address insertion at end of list or end of string
• Index out-of-bounds (array, list, substring)
• Neglecting consideration of negative values
• Inappropriate initialization when searching for extreme values

Iteration and Looping
• Violating loop bounds and off-by-one errors
• Failure to distinguish for, while, and enhanced-for loops
• Inappropriate re-initialization of variable within body of loop
• Premature exit (e.g., return statement within body of loop)
• Difficulty with writing correct nested-loop constructs

Figure 3. Common Errors in 2009–2012 Computer Science A Exam Responses — Loops & Bounds
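As a hypothetical illustration of the boundary and loop errors in Figure 3, the sketch below finds the smallest element of an array; the method and variable names are invented for this paper.

public class LoopBoundaryErrors {
    // Initializing the running minimum to 0 (instead of to an element of the
    // data) and writing the bound as i <= data.length are two of the most
    // frequent mistakes; the version below avoids both.
    public static int min(int[] data) {
        int smallest = data[0];                  // initialize from the data itself
        for (int i = 1; i < data.length; i++) {  // strict bound avoids off-by-one
            if (data[i] < smallest) {
                smallest = data[i];
            }
        }
        return smallest;
    }

    public static void main(String[] args) {
        int[] sample = {4, 2, 8};
        System.out.println(min(sample));  // prints 2
        // Boundary condition deliberately left visible: an empty array would
        // throw ArrayIndexOutOfBoundsException; a full solution must address it.
    }
}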

Problem Solving & Logic
• Solutions that fail to address the stated objective
• Solutions that violate specified constraints
• Solutions that do not implement correct algorithmic logic
• Using constants instead of parameterized data
• Mismatch with specified API (e.g., return and parameter types)
• Incorrect compound Boolean expressions

Figure 4. Common Errors in 2009–2012 Computer Science A Exam Responses — Problem Solving & Logic
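The following hypothetical sketch illustrates the specification and logic errors in Figure 4. The invented specification asks for a method inRange(value, low, high) that reports whether value lies between the bounds, inclusive; it is not taken from any exam.

public class SpecificationErrors {
    // Common failures against such a specification include returning a String
    // such as "yes"/"no" (API mismatch), hard-coding the bounds instead of
    // using the parameters, and malformed compound Boolean expressions such as
    //   low <= value <= high            (does not compile in Java)
    //   value >= low || value <= high   (wrong operator; always true when low <= high)
    public static boolean inRange(int value, int low, int high) {
        return value >= low && value <= high;  // correct compound condition
    }

    public static void main(String[] args) {
        System.out.println(inRange(5, 1, 10));   // true
        System.out.println(inRange(12, 1, 10));  // false
    }
}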

Object Oriented Design and Programming
• Confusion between primitive data and object references
• Not using accessor methods
• Invoking instance methods on a class
• Not creating instance variables necessary to maintain state
• Improperly and incompletely overridden methods
• Missing or improperly formed constructors
• Use of local variables when instance variables needed

Null
• Assigning the null literal to variables of primitive type
• Comparing primitive values to null
• Failing to check for null elements of a collection
• Failing to check for null instance data
• Omitting a return value of null when it is appropriate to do so

Figure 5. Common Errors in 2009–2012 Computer Science A Exam Responses — Object Oriented Programming & Null
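A minimal, hypothetical sketch of the object and null-handling issues in Figure 5 follows; the class NullAndObjectErrors and its methods are invented for this paper.

import java.util.ArrayList;
import java.util.List;

public class NullAndObjectErrors {
    private final List<String> names = new ArrayList<>();  // instance state, not a local variable

    public void add(String name) {
        names.add(name);
    }

    // Only object references may be null; a primitive such as int cannot be
    // compared to null. Elements of a collection, however, may be null and
    // must be checked before use.
    public String longest() {
        String best = null;  // null is a legitimate "no result yet" value here
        for (String n : names) {
            if (n == null) {
                continue;                      // guard against null elements
            }
            if (best == null || n.length() > best.length()) {
                best = n;
            }
        }
        return best;  // returning null is appropriate when the list holds no names
    }

    public static void main(String[] args) {
        NullAndObjectErrors demo = new NullAndObjectErrors();  // instance methods require an object,
        demo.add("Ada");                                       // not the class itself
        demo.add(null);
        demo.add("Grace");
        System.out.println(demo.longest());  // prints Grace
    }
}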

General
• Missing or inappropriate initializations
• Missing or incorrect variable declaration
• Incorrect algorithm to find extreme value (minimum or maximum)
• Failure to convert values to double prior to division when necessary
• Confusion over use of absolute value
• Failure to use modulus operation when appropriate

Figure 6. Common Errors in 2009–2012 Computer Science A Exam Responses — General
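As a final hypothetical illustration, the sketch below touches the arithmetic items in Figure 6 (integer division, modulus, and absolute value); the values and names are invented for this paper.

public class GeneralErrors {
    public static void main(String[] args) {
        int sum = 7, count = 2;

        // Integer division truncates: sum / count is 3, not 3.5.
        // Converting one operand to double before dividing preserves the fraction.
        double truncated = sum / count;         // 3.0
        double average = (double) sum / count;  // 3.5

        // The modulus operator supplies the remainder, e.g. for divisibility
        // tests or for wrapping an index around a length.
        boolean isEven = (sum % 2 == 0);        // false for 7

        // Absolute value: compare Math.abs(a - b), not a - b, when only the
        // magnitude of the difference matters.
        int a = 3, b = 8;
        boolean within2 = Math.abs(a - b) <= 2; // false: the values differ by 5

        System.out.println(truncated + " " + average + " " + isEven + " " + within2);
    }
}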

IMPLICATIONS
The persistence of the same types of errors over multiple years raises questions about why these problems persist and whether remedies exist to mitigate or eliminate them in the future. Such questions include:

• Are instructors actually aware of the prevalence of these errors in student performance?
This challenges an assumption that the commentaries provided in the “Student Performance Q&A” reports and the wealth of previously cited literature are reaching the intended audience and that they contain information of practical value in an accessible way.

• Are the observed errors more indicative of a lack of concept understanding or of difficulty with task performance?
This suggests creating assessment vehicles capable of distinguishing between conceptual understanding and the use of that knowledge when constructing computer programs. Knowing the contribution of each to the observable performance can inform instructional practices.

• Are the standard pedagogies for these common problem areas responsible for engendering or exacerbating the erroneous behaviors?
Answering this question requires looking into current teaching practices that impact the most widespread errors to see what characteristics may influence the outcomes (e.g., amount of time on task).

• Are there pedagogical practices that better address some or all of these key problem areas?
This question embodies hopes for alternative educational approaches that may improve student understanding and performance. Perhaps continued focus on specific erroneous behaviors will yield pragmatic approaches that better address them.

• Is there an intrinsic relationship between the nature of the current pedagogical ecosystem and the common errors in student performance?
This questions whether the problem may be endemic. If so, substantive improvement might require a fundamental restructuring rather than item-by-item mitigations.

CONCLUSIONS
The pervasiveness and persistence of the identified issues indicate that the extant learning environments and practices do not adequately address the problems. This points to the need for further study to isolate and identify the effects of particular pedagogical techniques on these most problematic areas. One interpretation of the collected information suggests the potential utility of taking each identified item and engaging in a focused investigation of the pedagogical approaches, materials, and experiences that best support students’ acquisition of the concept and ability to apply the knowledge in practice. For example, a focused study could examine how the concept of “null” is introduced, explained, used, and practiced. Another such study could address how much and what type of attention is paid to the transformation from natural-language problem statements to the design of solutions and from solution designs to implementations in code. Alternatively, the same information, interpreted in the context of the published body of recommended pedagogical techniques, might suggest a fundamental rethinking of the nature of and foundation upon which current introductory computer science courses are taught.

REFERENCES
[1] Baldwin, L. P., Macredie, R. D., Beginners and programming: Insights from second language learning and teaching, Education and Information Technologies, 4 (2), 167-179, 1999.
[2] Bayman, P., Mayer, R. E., Novice Users’ Misconceptions of BASIC Programming Statements, Report 82-1, University of California Santa Barbara, 1982.
[3] Bennedsen, J., Caspersen, M. E., Kölling, M. (Eds.), Reflections on the Teaching of Programming: Methods and Implementations, New York, NY: Springer, 1998.
[4] Bonar, J., Soloway, E., Uncovering principles of novice programming, POPL '83 Proceedings of the 10th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, 10-13, 1983.
[5] Bonar, J., Soloway, E., Preprogramming knowledge: A major source of misconceptions in novice programmers, Human-Computer Interaction, 1 (2), 133-161, 1985.
[6] Bringula, R. P., Manabat, G. M. A., Predictors of errors of novice Java programmers, World Journal of Education, 2 (1), 2012.

[7] Conway, R. W., Introductory instruction in programming, ACM SIGCSE Bulletin, 6 (1), 1974.
[8] Danielsiek, H., Paul, W., Vahrenhold, J., Detecting and understanding students’ misconceptions related to algorithms and data structures, SIGCSE '12 Proceedings of the 43rd ACM Technical Symposium on Computer Science Education, 21-26, 2012.
[9] Haberman, B., Averbuch, H., The case of base cases: Why are they so difficult to recognize? Student difficulties with recursion, SIGCSE Bulletin, 34 (3), 84-88, 2002.
[10] Jerinic, L., Teaching introductory programming: Agent-based approach with pedagogical patterns for learning by mistake, International Journal of Advanced Computer Science and Applications, 5 (6), 60-69, 2014.
[11] Johnson, E., Student performance Q&A: 2013 AP® Computer Science A free-response questions, AP Central, The College Board, 2013, http://media.collegeboard.com/digitalServices/pdf/ap/apcentral/ap13_computer_science_a_qa.pdf, retrieved 1 June 2015.
[12] Johnson, E., Student performance Q&A: 2014 AP® Computer Science A free-response questions, AP Central, The College Board, 2014, http://media.collegeboard.com/digitalServices/pdf/ap/apcentral/ap14-comp-sci-qa.pdf, retrieved 1 June 2015.
[13] Kaczmarczyk, L. C., Petrick, E. R., East, J. P., Herman, G. L., Identifying student misconceptions of programming, SIGCSE '10 Proceedings of the 41st ACM Technical Symposium on Computer Science Education, 107-111, 2010.
[14] Kranch, D. A., Teaching the novice programmer: A study of instructional sequences and perception, Education and Information Technologies, 17 (3), 291-313, 2012.
[15] Koulouri, T., Lauria, S., Macredie, R. D., Teaching introductory programming: A quantitative evaluation of different approaches, ACM Transactions on Computing Education, 11 (4), 2011.
[16] Lahtinen, E., Ala-Mutka, K., Järvinen, H.-M., A study of the difficulties of novice programmers, SIGCSE Bulletin, 37 (3), 14-18, 2005.
[17] Linn, M. C., Clancy, M. J., Can experts’ explanations help students develop program design skills?, International Journal of Man-Machine Studies, 36 (4), 511-551, 1992.
[18] McCracken, M., Almstrum, V., Diaz, D., Guzdial, M., Hagan, D., Kolikant, Y. B.-D., Laxer, C., Thomas, L., Utting, I., Wilusz, T., A multi-national, multi-institutional study of assessment of programming skills of first-year CS students, SIGCSE Bulletin, 33 (4), 125-180, 2001.
[19] Nikula, U., Gotel, O., Kasurinen, J., A motivation guided holistic rehabilitation of the first programming course, ACM Transactions on Computing Education, 14 (4), 2015.
[20] Okike, E. U., A code level based programmer assessment and selection criterion using metric tools, International Journal of Advanced Computer Science and Applications, 5 (11), 95-98, 2014.

[21] Paul, J., Student performance Q&A: 2009 AP® Computer Science A free-response questions, AP Central, The College Board, 2009, http://apcentral.collegeboard.com/apc/public/repository/ap09_comp_sci_a_qa.pdf, retrieved 1 June 2015.
[22] Paul, J., Student performance Q&A: 2010 AP® Computer Science A free-response questions, AP Central, The College Board, 2010, http://apcentral.collegeboard.com/apc/public/repository/ap10_comp_science_qa.pdf, retrieved 1 June 2015.
[23] Paul, J., Student performance Q&A: 2011 AP® Computer Science A free-response questions, AP Central, The College Board, 2011, http://apcentral.collegeboard.com/apc/public/repository/ap11_comp_sci_qa.pdf, retrieved 1 June 2015.
[24] Paul, J., Student performance Q&A: 2012 AP® Computer Science A free-response questions, AP Central, The College Board, 2012, http://media.collegeboard.com/digitalServices/pdf/ap/ap12_computer_science_qa.pdf, retrieved 1 June 2015.
[25] Porter, L., Guzdial, M., McDowell, C., Simon, B., Success in introductory programming: What works?, Communications of the ACM, 56 (8), 34-36, 2013.
[26] Robins, A., Rountree, J., Rountree, N., Learning and teaching programming: A review and discussion, Computer Science Education, 13 (2), 137-172, 2003.
[27] Shuhidan, S., Hamilton, M., D’Souza, D., Understanding novice programmer difficulties via guided learning, ITiCSE '11 Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education, 213-217, 2011.
[28] Smith, P. A., Webb, G. I., Reinforcing a generic computer model for novice programmers, Proceedings of the ASCILITE Seventh Australian Society for Computers in Learning in Tertiary Education Conference, 1995.
[29] Soloway, E., Spohrer, J. C. (Eds.), Studying the Novice Programmer, Hillsdale, NJ: Lawrence Erlbaum, 1989.
[30] Utting, I., Tew, A. E., McCracken, M., Thomas, L., Bouvier, D., Frye, R., Paterson, J., Caspersen, M., Kolikant, Y. B.-D., Sorva, J., Wilusz, T., A fresh look at novice programmers’ performance and their teachers’ expectations, ITiCSE-WGR '13 Proceedings of the ITiCSE Work Group Reports Conference on Innovation and Technology in Computer Science Education, 15-43, 2012.
[31] The College Board, AP Computer Science A: Course overview, AP Students, 2015, https://apstudent.collegeboard.org/apcourse/ap-computer-science-a, retrieved 1 June 2015.
[32] The College Board, AP Computer Science A: Course details, AP Students, 2015, https://apstudent.collegeboard.org/apcourse/ap-computer-science-a/course-details, retrieved 1 June 2015.
[33] The College Board, AP Computer Science A Course overview, AP Students, 2015, http://media.collegeboard.com/digitalServices/pdf/ap/ap-course-overviews/ap-computer-science-a-course-overview.pdf, retrieved 1 June 2015.
[34] The College Board, AP Computer Science A: About the exam, AP Students, 2015, https://apstudent.collegeboard.org/apcourse/ap-computer-science-a/about-the-exam, retrieved 1 June 2015.

[35] The College Board, The AP Computer Science A Exam, AP Central, http://apcentral.collegeboard.com/apc/public/exam/exam_information/2000.html, retrieved 1 June 2015.
[36] The College Board, Explore AP, 2015, https://apstudent.collegeboard.org/exploreap, retrieved 1 June 2015.
[37] The College Board, Computer Science A Course Description (Effective Fall 2014), 2014, http://media.collegeboard.com/digitalServices/pdf/ap/ap-computer-science-a-course-description-2014.pdf, retrieved 1 June 2015.
[38] AP® Computer Science A 2009 free-response questions, The College Board, 2009, http://apcentral.collegeboard.com/apc/public/repository/ap09_frq_computer_science_a.pdf, retrieved 1 June 2015.
[39] AP® Computer Science A 2010 free-response questions, The College Board, 2010, http://apcentral.collegeboard.com/apc/public/repository/ap10_frq_computer_science_a.pdf, retrieved 1 June 2015.
[40] AP® Computer Science A 2011 free-response questions, The College Board, 2011, http://apcentral.collegeboard.com/apc/public/repository/ap11_frq_comp_sci_a.pdf, retrieved 1 June 2015.
[41] AP® Computer Science A 2012 free-response questions, The College Board, 2012, http://apcentral.collegeboard.com/apc/public/repository/ap_frq_computerscience_12.pdf, retrieved 1 June 2015.
[42] AP® Computer Science A 2009 scoring guidelines, The College Board, 2009, http://apcentral.collegeboard.com/apc/public/repository/ap09_computer_science_a_sgs.pdf, retrieved 1 June 2015.
[43] AP® Computer Science A 2010 scoring guidelines, The College Board, 2010, http://apcentral.collegeboard.com/apc/public/repository/ap10_comp_sci_a_scoring_guidelines.pdf, retrieved 1 June 2015.
[44] AP® Computer Science A 2011 scoring guidelines, The College Board, 2011, http://apcentral.collegeboard.com/apc/public/repository/ap11_comp_sci_a_scoring_guidelines.pdf, retrieved 1 June 2015.
[45] AP® Computer Science A 2012 scoring guidelines, The College Board, 2012, http://apcentral.collegeboard.com/apc/public/repository/ap12_computer_science_a_scoring_guidelines.pdf, retrieved 1 June 2015.
[46] AP® Computer Science A 2009 free-response questions scoring statistics, The College Board, 2009, http://apcentral.collegeboard.com/apc/public/repository/ap09_computer_science_a_scoring_statistics.pdf, retrieved 1 June 2015.
[47] AP® Computer Science A 2010 free-response questions scoring statistics, The College Board, 2010, http://apcentral.collegeboard.com/apc/public/repository/ap10_computer_science_a_scoring_statistics.pdf, retrieved 1 June 2015.

[48] AP® Computer Science A 2011 free-response questions scoring statistics, The College Board, 2011, http://apcentral.collegeboard.com/apc/public/repository/ap11_comp_sci_scoring_statistics.pdf, retrieved 1 June 2015.
[49] AP® Computer Science A 2012 free-response questions scoring statistics, The College Board, 2012, http://apcentral.collegeboard.com/apc/public/repository/ap12_comp_sci_A_scoring_statistics.pdf, retrieved 1 June 2015.
