
Practical Assessment, Research & Evaluation (PARE)

Editorial Board

Lawrence M. Rudner, 
Independent Consultant, co-editor and co-founder

Donald Sharpe,
University of Regina, co-editor

Senior Members

Alan Huebner,
University of Notre Dame

Craig Mertler,
Arizona State University

Jason W. Osborne,
Clemson University

William D. Schafer,
University of Maryland, Emeritus co-editor and co-founder

Nathan Thompson,
Assessment Systems Corp.

Bruno D. Zumbo,
University of British Columbia

Reviewers

Avi Allalouf,
National Institute for Testing and Evaluation (NITE), Israel

A. Alexander Beaujean,
Baylor University

Ruth Childs,
University of Toronto

Helenrose Fives,
Montclair State University

Kurt F. Geisinger,
University of Nebraska-Lincoln

Gene V Glass,
Arizona State University

Heather D. Harris,
Inteleos

Jeanne Horst,
James Madison University

Gunilla Näsström,
Umeå University, Sweden

Jonathan Rubright,
National Board of Medical Examiners

Michael Russell,
Boston College

Shayna Rusticus,
Kwantlen Polytechnic University

Matt Williams,
Massey University

Joost de Winter,
Delft University of Technology

Amanda Wolkowitz,
Alpine Testing

Welcome to the PARE homepage

Practical Assessment, Research & Evaluation (PARE) is an online journal providing access to refereed articles that can have a positive impact on assessment, research, evaluation, and teaching practice.

The production of PARE is made possible through the generous support of our sponsors. Please visit their websites and take a moment to read about these well-known leaders in the fields of measurement and education.

Manuscripts published in Practical Assessment, Research & Evaluation are scholarly syntheses of research and ideas about methodological issues and practices. They are designed to help members of the community keep up to date with effective methods, trends, and research developments from a variety of settings. More information about PARE is available in our Factsheet.

Manuscripts to be considered for Practical Assessment, Research & Evaluation should be short (2,000-8,000 words, or about eight pages, exclusive of tables and references) and should have clear, generalizable implications for practice in education, certification, or licensure. They should conform to the stylistic conventions of the American Psychological Association (APA). Submissions should be single-spaced, with tables and graphs in the body of the submission rather than at the end. See the Review and Policies sections of this website for technical specifications and a list of suggested topics. Manuscripts should be submitted electronically to editor@pareonline.net.

Permission is granted to distribute any article in this journal for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. Please notify the editor and the author if an article is to be used.

Practical Assessment, Research & Evaluation is listed among the journals in the Scholarly Publishing and Academic Resources Coalition (SPARC), the Directory of Open Access Journals (DOAJ), the Directory of Open Access Scholarly Journals in Education, and Cabell's Directories. Manuscripts that appear in PARE are indexed by ERIC, Elsevier's SciVerse Scopus, Scimago, and EBSCO.
 

  