Research Article | Open Access

Examining Ease and Challenges in Tele-Assessment of Children Using Slosson Intelligence Test

    Rabia Jaffar

    Institute of Clinical Psychology, University of Karachi, Pakistan

    Amena Zehra Ali

    Institute of Clinical Psychology, University of Karachi, Pakistan


Received
15 Jul, 2019
Accepted
19 Apr, 2020
Published
31 Dec, 2021

As the world came to terms with the longevity of the COVID-19 crisis, there was a mass migration toward tele-health services, including tele-assessment. Practitioners argued that delaying assessments would mean delaying the provision of services; therefore, wherever possible, assessment procedures were modified for online settings, including the assessment of cognitive abilities. Along with its many advantages, tele-assessment brings many unpredictable challenges. In this study we explored these by administering the Slosson Intelligence Test, Third Edition (Slosson, 2006) via Zoom to a sample of 29 school-going children aged 6 to 16 years. Observations were divided into two categories, logistical and practical. Results showed that technology improves the accessibility of services, solves many logistical problems such as the availability of testing venues, and makes communication easier. However, practicality was hindered because the testing environment was less controlled; factors such as internet disruptions, limitations in observation, and the presence of other people and objects in the household may adversely affect the scores. Moreover, virtual fatigue may be a factor that practitioners need to consider.

A major factor in slowing the spread of the coronavirus is avoiding all physical interaction with other people beyond what is necessary (World Health Organization, 2020). For this reason, at the onset of the contagion, in-person health services other than life-saving ones were suspended around the world. With no end in sight, practitioners and patients sought to avail themselves of tele-health or tele-medicine services.

Tele-health services are not new, but they have never been as widely accepted and utilized as during the past year (Holtz, 2021). While the provision of psychotherapy and counseling services through telecommunication has been fairly common, conducting assessments through these channels is a fairly recent trend (Brearly et al., 2017). Even then, tele-assessment has been used more commonly with adults and geriatric populations and not as substantially with children (Farmer et al., 2020).

Proponents of tele-assessment and tele-psychology argue that the reach of psychological services can be expanded through telecommunication and that we should work to establish standardized procedures and psychometric properties of instruments for this mode of care and assessment (Farmer et al., 2020). However, most tools are normed and standardized for in-person assessment, and their equivalence for remote administration needs to be established. Amarendran et al. (2011) compared assessors' ratings on a neuropsychiatric tool administered face to face and observed remotely through telecommunication and found no significant difference. A meta-analysis by Brearly et al. (2017) of 12 studies comparing adults' scores on neuro-cognitive tests administered via videoconference with those obtained during on-site administration found that orally mediated tasks, including digit span, verbal fluency, and list learning, were not affected by videoconference administration. Heesacker et al. (2020) reviewed studies on computer-assisted psychological assessment and psychotherapy for college students and found it to be effective across samples and cultures.

Developing practices for delivering services remotely can help us overcome many obstacles and encourage more people to seek help. These include reduced travel time and transportation costs, availability of more assessors at the same time, being able to schedule back to back appointments thus saving time, having the opportunity to observe patients in their home environment, increasing reach of services in rural or hard to reach areas, overall cost reduction, and lastly avoiding stigmatization which causes many to avoid seeking help (Farmer et al., 2020; Heesacker et al., 2020; Perrin et al., 2020).

Literature is still scarce, especially in relation to the cognitive assessment of children. Wright found equivalent results for face-to-face and online administration of the Reynolds Intellectual Assessment Scales (Wright, 2018a), the Woodcock-Johnson Tests of Cognitive Abilities and Achievement, Fourth Edition (Wright, 2018b), and the Wechsler Intelligence Scale for Children (Wright, 2020), and concluded that the available norms for these tests could be used for online administration as well. These studies, however, were conducted using the online testing platform Presence Learning, and on-site proctors were present to facilitate the administration of performance tasks. This is not feasible under the social distancing protocols of COVID-19 (Farmer et al., 2021). Hence, it has been recommended to substitute such subtests and prorate composite scores (Pearson, 2020). The equivalence of other tests in this regard has not been established.

As the world scrambled to deal with the spread of the coronavirus, it was initially deemed suitable to delay psychological assessments until in-person services could resume. Delaying assessment, however, meant that the provision of services would also be delayed. School psychologists, among others, feared that delaying assessments for long, and the backlog it would create, would increase the pressure on them and negatively impact students (Stifel et al., 2020). So, when it became evident that COVID-19 and social distancing would be a part of our lives for the foreseeable future, national associations, test publishers, and researchers supplied guidelines for using adapted tele-assessment in cases where testing was inevitable (American Psychological Association, 2020; Canadian Psychological Association, 2020; Hass & Leung, 2021; Krach et al., 2020; Pearson, 2020; Stifel et al., 2020).

Adapted tele-assessment is defined as using telecommunication technology in the administration of tests that were originally designed and standardized for face-to-face testing (Krach et al., 2020). While the guidelines from different parties differed in various respects, the common recommendation was to practice caution and to report that the administration had differed from standard practices and that the available norms might not apply. Caution here refers to choosing tests and practices with an awareness of how the results may affect the test taker and the services they may require, and to the fact that the validity and reliability of scores might be compromised due to deviation from standard procedure (Krach et al., 2020). Pearson (2020) encouraged the use of online testing platforms such as Q-global and Presence Learning, and also conducted a webinar to train practitioners. However, these platforms are not widely accessible, especially in developing economies where the effects of COVID-19 are already causing financial constraints.

Many practitioners were using tele-assessment procedures as an alternative to or an adjunct to traditional assessment procedures before 2020 as well; however, these numbers have surged rapidly since the pandemic and its social distancing protocols (Wosik et al., 2020). A large number of practitioners spontaneously moving toward tele-assessment for the first time meant a lack of adequate training (Farmer et al., 2020) and familiarity with procedures (Krach et al., 2020). Moreover, using traditional instruments for remote assessment is itself a new avenue with its own set of issues. Even with many guidelines available, it is impossible to predict all the issues that may arise in different cultures.

An overview of the available literature points to the following issues that may negatively impact or hinder tele-assessment: low internet speed and disruptions, dropped calls, inconsistent audio, difficulty adjusting the camera, unavailability of adequate hardware, lack of familiarity with video-conferencing software on both the examiner's and the test taker's end, privacy issues and distractions due to limited space in homes (especially with large families), the availability of toys and other sources of entertainment nearby (the device being used for testing may itself be a source of distraction), and lack of control over the environment, such as noise from neighbors or the street. Moreover, the limited frame of the camera restricts what examiners can observe (Brearly et al., 2017; Wagner et al., 2020). Other challenges include difficulties in providing explanations and obtaining informed consent, which takes longer than in physical settings; people's tendency to adopt a casual attitude and take the practitioner's time for granted, thus causing problems in keeping to a schedule; and parents' inclination to help the child taking the test (Hass & Leung, 2021; Perrin et al., 2020). As mentioned, this list is not exhaustive, and more culturally diverse research is needed to understand the full range of issues we need to counter in different populations.

Given the longevity of the contagion, the provision of tele-health services, including tele-assessment, seems inevitable. This also gives us an opportunity to develop sustainable models for delivering services through this medium. For that, it is important to understand the different factors involved in different cultures. While there seem to be many benefits, there are also many factors that an examiner needs to be cognizant of when deciding to conduct an online assessment. The present study aims to examine and report on these issues as they occur when the SIT-R3 is administered to school-going children through Zoom.

METHOD

This study is part of the author's doctoral thesis, titled Adaptation and Standardization of the Slosson Intelligence Test-Revised Third Edition (SIT-R3; Slosson, 2006) According to the Pakistani Population. During data collection for the standardization phase of the study, the first cases of coronavirus emerged in the city, and it went into lockdown in March 2020. Thus the in-person data collection process could not be continued. The literature, though sparse, shows encouraging results for the equivalence of scores on verbal tests in face-to-face and remote settings. Therefore, after reviewing the guidelines, it was decided to continue data collection through Zoom, the video-conferencing platform most commonly used for remote teaching and other purposes. The test was first administered to a limited sample in February and March, 2020. The examiners' observations were analyzed to report on the challenges experienced versus the ease and benefits. This paper records that entire process. The methodology was designed to mitigate the effects of some of the problems pointed out in previous studies, and controls were placed on the sample. The recruitment process and procedures, especially those that differed from standard procedures, are described in detail in this section.

Participants
Twenty-nine participants (14 males, 15 females), aged 6-16 years, were recruited through purposive sampling from Karachi, Pakistan. The sample was divided into the following age clusters:


Participants were selected to fit the following inclusion criteria: attending formal school, being familiar/comfortable with Zoom, having access to high-speed internet, and having access to appropriate hardware such as a smartphone, tablet, or computer. Approximately 17% of the participants approached refused to participate, with the most common reason being Zoom fatigue (Bailenson, 2021). The recruitment process involved calling the parents on their phone (audio) or sending a text message to explain the purpose of the study. A PDF of the consent form was then e-mailed or sent to them through text-messaging apps. Following the recommendation by Hass and Leung (2021) to simplify things and save time, explanations were provided and parents were encouraged to clarify any queries they had regarding the purpose or procedure at this stage. Information about the availability of appropriate devices and the child's familiarity with Zoom was also obtained through one of these mediums. The parents were informed that since the test was still in the standardization phase, it would not be possible to generate an IQ score; however, feedback about the child's strengths and weaknesses based on their performance could be shared if required. Moreover, the child's results would be kept anonymous and confidential, and they had the right to withdraw from the study at any point. Verbal consent was obtained from the child by the examiner before the beginning of the test; the purpose of the study and their right to withdraw were explained to them.

Instrument
The Slosson Intelligence Test Revised-Third Edition (SIT-R3; Slosson, 2006) is an individually administered verbal test of intelligence for individuals 4 to 64 years old. It is designed for use in situations where a quick estimate of general verbal cognitive ability is needed, such as screening exceptional children in schools and colleges for placement or making tentative diagnoses in clinical settings. It can be used by a wide range of professionals, including teachers, counselors, and special education teachers, as it does not require a high level of training. Moreover, its administration and scoring occur simultaneously, enabling the test to be completed in a brief time, 10-15 minutes for an average examinee (Slosson, 2006). Almost all items require only a verbal response and no manipulation of any material. Moreover, the questions are brief and there is no time limit for any item. This makes the SIT-R3 an ideal instrument for assessing cognitive abilities via an online platform.

Examiners
The tests were administered by three examiners. One is a Clinical Psychology doctoral fellow who has completed the required coursework in assessment and interpretation. The other two are research assistants who have completed a master's degree in Psychology with a focus on Clinical Psychology and have been working with children at their respective places of work. The research assistants underwent training to learn the administration and scoring of the SIT-R3 according to standard procedures; they practiced conducting it via Zoom and observed a testing session prior to independent administration.

Procedure
Once verbal (on an audio phone call) or written (through text message) consent was received from the parents, a mutually convenient time for the child and the examiner was determined. Parents were sent a link for the Zoom meeting, and demographic information was obtained. They were also informed that the test would be administered individually (in the case of siblings) and that the child would require a distraction-free environment. In the case of older participants (13 years and above), the time was set up directly with them through their personal phone or e-mail after consent was obtained from the parents.

A point of consideration here is shared devices. If the family shared one device for video conferencing, timings had to be coordinated accordingly. For example, one participant's mother had an online yoga class around the time the child was free for assessment, so the assessment was scheduled in a way that the child would finish just in time for the mother to join her class on the shared device. This was easily coordinated beforehand; however, due to internet disruptions the session took longer than expected, putting pressure on the examiner to rush. Hence, from then onwards parents were asked to keep a buffer before scheduling another activity on the device or for the participant.

For younger participants, parents were instructed to keep a pencil, a paper, and a book nearby for the administration of items where the child is required to follow instructions such as "put the paper inside the book" (item 12). At the onset of the meeting, the participants were instructed to keep their microphones and videos on throughout and to position themselves so that their face was visible to the examiner. They were asked to inform the examiner immediately if there was any lag or disruption in communication during the session. Next, it was checked whether the child was alone in the room; if not, the other person was requested to leave after it was explained how their presence might affect the child's performance. One exception was the mother of a very shy 6-year-old participant; the mother was allowed to stay but was instructed only to encourage the child and not to help with any answers.

Once the participant was settled, standard instructions pertaining to test administration were given, and the items were administered according to the instructions in the manual. The administration of a few items had to be tailored to the online setting. For items 5, 7, 17, and 23, the examiner is required to draw apples on the back of the scoring sheet and ask the child to count them; in online administration, the screen-sharing feature of the software was utilized and the apples were drawn on the whiteboard while the child was asked to count. Secondly, for item 14, the child is shown a picture of two squares of different sizes (on the back of the scoring sheet) and asked to identify the bigger and smaller one; in the online version, a picture of a big and a small square (taken from the internet) was shown to the child by screen sharing. On average, it took approximately 30 minutes to administer the test to one participant, going as high as 60 minutes for some participants due to internet and other disruptions, while some finished in 15 minutes when the child was attentive and active and the internet was running smoothly. Some participants, despite having enough space in the house, had trouble finding a private spot due to siblings and other family members walking in and out.

Examiners recorded their observations in the form of written or voice notes immediately after test administration; some observations were recorded during the session if there was a pause or disruption. The examiners' notes from the testing sessions, along with the text communication between the parents and examiners regarding scheduling the sessions and explaining the objectives, were then analyzed.

Classification and Analysis
Content analysis was conducted by the authors, one an Associate Professor and the other a Doctoral fellow in clinical psychology. Initially, a few categories were identified based on the literature mentioned earlier. These included: (1) difficulties in scheduling, (2) connectivity issues, (3) privacy issues, (4) limitations in observation, (5) difficulties in communicating objectives, (6) distractions, (7) extraneous noises, (8) help from parents, (9) availability of testing venue, and (10) efficiency.

Thematic analysis of the data was conducted to determine how these categories would be defined culturally, and to examine any new themes/categories that may emerge. Notes were read/played out aloud and simultaneously discussed. It was agreed to add two more categories, that is, (a) disregard of instructions, and (b) ease in scheduling. Moreover, the category ‘help from parents’ was modified to ‘extrinsic help’ to incorporate more variables. The categories were then sorted into two groups, that is, logistical pertaining to the planning and organization that leads to the administration; and practical which refers to test administration and the testing environment.

The final categories and their definitions, as agreed upon by the raters are as follows:

Logistical Issues

  1. Difficulty in communicating objectives refers to any problems encountered by the examiner when explaining goals and procedures to the parents/participants and obtaining informed consent through telecommunication technology.
  2. Ease in scheduling refers to parents' willingness to schedule sessions sooner (such as the same day or the very next day after first contact) and, when needed, the participants' and examiners' willingness to reschedule.
  3. Difficulty in scheduling categorizes instances wherein the participant/parent seemed to take the examiner's time for granted and would cancel appointments at the last minute or show up late.
  4. Availability of testing venues refers to the examiners’ experience of not having to worry about availability of testing rooms/venues at the required time.
  5. Efficiency refers to ability of examiners as well as participants to schedule back to back sessions or other activities.

Practical Issues

  1. Connectivity issues refers to glitches or lag in the stream that significantly disrupted the administration of items, or to instances where the internet disconnected completely and the meeting had to be restarted, from either the examiner's or the participant's side. Other instances include receiving a call on the device from another application, which causes the stream to pause.
  2. Privacy issues refer to unavailability of private space, where the participant is able to take the test without others being present.
  3. Distractions refer to any person or thing disturbing the flow of the test or affecting the participant's attention, such as a toy, a sibling or parent coming in to check how they were doing or whether they were done, or e-mail, messages, or other notifications appearing on the screen.
  4. Extraneous noises refer to noises that are beyond our control such as traffic sounds, pets or the Azaan (call to prayer from the mosques).
  5. Extrinsic help refers to help from parents or other sources, such as using paper and pencil for quantitative (mental math) items. The participant could also be furtively getting help from search engines or applications on the device being used, such as a calculator, a dictionary, or Google. This category also includes books or other things in the room that may inadvertently provide clues to answers. These were difficult to observe directly due to obvious limitations; hence, they were recorded based on the participants' body language or comments.
  6. Disregard of instructions categorizes instances where the participants neglected to follow instructions such as keeping the video on or staying seated in front of the camera. For example, one participant lay down and another walked away for a minute or so. This category also includes the participant constantly moving the device, causing the camera frame to shift and making observation difficult and uncomfortable for the examiner.
  7. Limitations in observation refers to occasions where the examiner could not observe the participant's response to items where they were required to follow instructions such as "put the paper inside the book" (item 12) or "show me your heel" (item 26b).

The notes were read out loud a second time and categorical data analysis was conducted. All categories were scored based on unanimous agreement amongst the raters, and the findings were tabulated. The frequencies were determined such that if a certain experience with a participant fell in one category, it was marked once, irrespective of how many times it occurred. For example, if a participant was distracted by a sibling during the session, it was marked once in the category Distractions, even if the sibling came in and distracted them more than once. Similarly, if a parent changed or cancelled appointments at the last minute more than once, it was still marked once for that participant in that category (Difficulty in Scheduling). Hence, the frequencies and percentages tell us the number of participants who had an experience included in a particular category (or with whom the examiner experienced that instance), not the number of times they had that experience.
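To make the scoring rule above concrete, the following minimal Python sketch illustrates one way the per-participant marking could be tabulated. The participant IDs, category labels in the sample data, and the helper structure are hypothetical illustrations, not the study's actual records or analysis code.

```python
from collections import defaultdict

# Hypothetical examiner notes: participant ID -> list of category labels,
# one entry per observed event. A participant may trigger the same
# category several times within a session.
observations = {
    "P01": ["Distractions", "Distractions", "Connectivity issues"],
    "P02": ["Difficulty in scheduling"],
    "P03": ["Connectivity issues", "Extraneous noises", "Connectivity issues"],
}

N_PARTICIPANTS = 29  # total sample size reported in the study

# Record each participant at most once per category, regardless of how
# many times the experience recurred during their session.
participants_per_category = defaultdict(set)
for pid, events in observations.items():
    for category in events:
        participants_per_category[category].add(pid)

# Frequency = number of participants (not events); percentage is taken
# over the full sample.
for category, pids in sorted(participants_per_category.items()):
    frequency = len(pids)
    percentage = 100 * frequency / N_PARTICIPANTS
    print(f"{category}: {frequency} ({percentage:.1f}%)")
```

Under this rule, "P01" contributes only one count to Distractions and "P03" only one to Connectivity issues, which mirrors how the tabulated frequencies in the Results section should be read.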

RESULTS

The analysis suggested that a better presentation of results would be to divide the categories into logistical and practical aspects of tele-assessment, rather than comparing the observations in terms of ease/benefits versus challenges or difficulties, as was initially planned.

Table 1: Frequency Counts for Logistical Categories in Descending Order of Frequency

Frequencies indicate the number of participants who had an experience included in a particular category (or with whom the examiner experienced that instance), not the number of times it happened.

Table 2: Frequency Counts for Practical Categories in Descending Order of Frequency

Frequencies indicate the number of participants who had an experience included in a particular category (or with whom the examiner experienced that instance), not the number of times it happened.

Table 3: Age-Based Frequency Counts for Practical Categories

DISCUSSION

Previously, the most common obstacle to effective tele-practice was a lack of familiarity with the required technology. With most of the world having lived virtually for over a year now, this point is moot. However, as we are still adapting and adjusting, virtual fatigue or Zoom fatigue is a factor we may need to consider (Bailenson, 2021). This was seen when 17% of the participants approached for the study refused to participate, citing this as the main reason.

Those who advocate increasing the reach and feasibility of services through tele-practice mostly focus on the logistical aspects, such as the availability of more space and more assessors at the same time and reduced travel time, which encourages more people to engage. The results in Table 1 are a testimony to that. Clarifying objectives and obtaining informed consent through video conferencing has been reported as problematic; however, when it was done before the session through audio phone calls, text messages, or e-mail, it proved to be efficient.

Parents seemed content with the arrangement of doing it via Zoom, as they did not have to deal with commuting or with coordinating different schedules in the case of siblings; each sibling could take the test on a day and at a time that suited them. Some participants chose to take the test from places other than their homes, such as a friend's or grandparents' house. The availability of devices at the required time could be a potential issue when siblings share them; however, it was easily managed when coordinating appointments. Similarly, rescheduling or changing appointments when necessary (such as when one examiner's internet connection was cut off for an entire day) was easier. The downside was that this led almost one-fourth (Table 1) of the parents and participants to adopt a lax attitude and show up late or change appointments at the last moment, causing difficulties for the examiner. Moreover, it may also have led some parents to over-schedule activities for their children, causing fatigue.

Moving on to the practical aspects, internet disruptions and connectivity issues were expected; however, in almost half of the cases there were no connectivity issues that significantly disrupted the testing process. Of the other half, a few connections dropped completely and the meeting was restarted promptly. One anomaly was a participant who could not join for at least an hour because their internet was not working. In the remaining cases, significant glitches or lags in audio and video caused challenges; in those cases the examiner mostly had to repeat the question or ask the participant to repeat the answer. This left the examiners questioning the integrity of the scores, especially those for auditory memory items (in which the test taker has to listen to and repeat a set of numbers or a sentence verbatim).

The environment plays a vital role in assessments and is carefully controlled to avoid confounding the scores. Even though most participants had no trouble finding a private space to take the test, many had distractions such as parents or siblings walking in. Most of these were managed easily by requesting them to leave the room; however, this disrupted the flow of the process. Moreover, there may be other distractions such as toys or other objects around, and if the participant is at a friend's or relative's house, they may be distracted by what they are missing out on. Most importantly, there may be items in the space which inadvertently give the test taker clues to answers, which affects the validity of the scores. For example, when asked "Who is the author of Adventures of Sherlock Holmes?" one participant inquired if she could just check the author's last name on the book that was lying close by. Test takers may also take advantage of applications such as a dictionary, calculator, or search engine, which are easily accessible on the devices. The examiner may observe this through body language, or they may miss it because of an unstable connection or if they are not paying close attention. This lack of control over the testing environment in remote settings is not conducive to our standard testing practices. There is no way to ensure that the participant follows instructions such as staying in the camera's frame, or that they do not get help from parents or other sources. Some participants would constantly move their device, causing the camera frame to shift; this caused considerable discomfort for the examiners and hindered observation. Difficulties or limitations in observation were amplified in the case of younger children for items where they were required to point to something or the examiner had to observe how they followed certain instructions, for example, item 26(b) "Show me your heel" or item 9(a) "Put the pencil on top of the book". Asking them to bring the items into the camera's frame and then demonstrate might be uncomfortable for younger children; it also adds another set of instructions to the item, which may not be appropriate for the ages it is meant for.

Table 3 also makes it clear that a participant's age was not a determining factor for testing behaviour. Some younger children (6-7 years old) seemed more comfortable with Zoom than their older siblings, followed instructions, and actively participated, while it was more difficult to get some older participants to follow instructions. Problems during the administration of the test can surface spontaneously even after careful planning and are beyond the control of the researcher. However, if the scores are affected more by construct-relevant factors than by construct-irrelevant factors, they can be considered valid (Farmer et al., 2020). With some participants we experienced no problems at all, from planning to administration. On the other hand, there were cases with multiple challenges. Consider the example of the participant who could not find any private space apart from their wardrobe and still had to put their head out multiple times to ask their siblings to keep their voices down. On top of this, there were significant connectivity issues and they kept moving their phone. In this case it took almost an hour to complete the assessment, and there were many factors confounding the scores.

LIMITATIONS

Tele-health services exacerbate the social inequities that exist in our societies. This was seen during this study as well: to control for access to high-speed internet and appropriate hardware and software, the sample had to be restricted to the middle and upper socioeconomic groups. Moreover, the study was conducted during the cooler months, and no power outages were experienced in any of the cases. This will in all probability not be true for the summer months, when power outages in a city like Karachi are more frequent and widespread; the frequency and management of this problem also depend on the area of residence as well as financial realities.

IMPLICATIONS

As the demand for tele-health services has surged recently, it is important to understand the different issues that may arise during the provision of these services in different cultures. This study helps practitioners in developing countries like Pakistan understand the problems they may come across when conducting assessments of children's cognitive abilities through video-conferencing, while also pointing out the benefits. Knowing these factors will help them be better prepared for any problems and, more importantly, help them decide whether this mode of assessment would be appropriate for an individual or not.

CONCLUSION

It is evident that when tests standardized for in-person testing are used in a remote setting, the validity and reliability of the scores are compromised. However, given the extraordinary circumstances we find ourselves in, guidelines suggest that adapted tele-assessment may be conducted when needed, and this should be mentioned clearly when reporting results. Overall, this study proved fruitful in providing insight into the factors affecting tele-assessment of children, particularly in developing countries such as ours. Our observations highlight that while technology improves the accessibility of services and solves a lot of logistical problems, many challenges remain that may negatively impact the actual administration of the test (in the absence of proctors or specifically developed online platforms), as the examiner has less control over the testing environment. The time required to complete one assessment may also differ in each case, depending upon the frequency and type of disruption.

REFERENCES

  1. Amarendran, V. (2011). The reliability of tele-psychiatry for a neuropsychiatric assessment. Telemedicine and e-Health, 17(3), 223-225.
  2. American Psychological Association. (2020). Guidance on psychological tele-assessment during the COVID-19 crisis.
  3. Bailenson, J. N. (2021). Nonverbal overload: A theoretical argument for the causes of Zoom fatigue. Technology, Mind, and Behavior, 2(1), 48-60.
  4. Brearly, T. W., Shura, R. D., Martindale, S. L., Lazowski, R. A., Luxton, D. D., Shenal, B. V., & Rowland, J. A. (2017). Neuropsychological test administration by videoconference: A systematic review and meta-analysis. Neuropsychology Review, 27(2), 174-186.
  5. Canadian Psychological Association. (2020). Providing psychological services via electronic media-interim ethical guidelines for psychologists providing psychological services via electronic media.
  6. Farmer, R. L., McGill, R. J., & Dombrowski, S. C. (2021). Conducting Psycho-educational assessments during the COVID-19 crisis: The danger of good intentions. Contemporary School Psychology, 25(1), 27-32.
  7. Hass, M. R., & Leung, B. P. (2021). When you can’t R. I. O. T., R. I. O: Tele-assessment for school psychologists. Contemporary School Psychology, 25(1), 33-39.
  8. Heesacker, M., Perez, C., Quinn, M. S., & Benton, S. (2020). Computer‐assisted psychological assessment and psychotherapy for collegians. Journal of Clinical Psychology, 76(6), 952-972.
  9. Holtz, B. E. (2021). Patients’ perceptions of telemedicine visits before and after the coronavirus disease 2019 pandemic. Telemedicine and e-Health, 27(1), 107-112.
  10. Krach, S. K., Paskiewicz, T. L., & Monk, M. M. (2020a). Testing our children when the world shuts down: Analyzing recommendations for adapted tele-assessment during COVID-19. Journal of Psycho-educational Assessment, 38(8), 923-941.
  11. Krach, S. K., Paskiewicz, T. L., Ballard, S. C., Howell, J. E., & Botana, S. M. (2020b). Meeting the COVID-19 deadlines: Choosing assessments to determine eligibility. Journal of Psycho-educational Assessment, 39(1), 50-73.
  12. Pearson. (2020). Tele-practice and the WISC-V.
  13. Perrin, P. B., Rybarczyk, B. D., Pierce, B. S., Jones, H. A., Shaffer, C., & Islam, L. (2020). Rapid tele-psychology deployment during the COVID‐19 pandemic: A special issue commentary and lessons from primary care psychology training. Journal of Clinical Psychology, 76, 1173-1185.
  14. Slosson, R. (2006). Slosson Intelligence Test Revised. New York: Slosson Educational Publications.
  15. Stifel, S. W. F., Feinberg, D. K., Zhang, Y., Chan, M. K., & Wagle, R. (2020). Assessment during the COVID-19 pandemic: Ethical, legal, and safety considerations moving forward. School Psychology Review, 49(4), 438-452.
  16. Wagner, L., Corona, L. L., & Weitlauf, A. S. (2021). Use of the Tele-Autism Spectrum Disorder PEDS for autism evaluations in response to COVID-19: Preliminary outcomes and clinician acceptability. Journal of Autism Developmental Disorder, 51, 3063-3072.
  17. World Health Organization. (2020). Coronavirus disease (COVID-19) pandemic.
  18. Wosik, J., Fudim, M., Cameron, B., Gellad, Z. F., Cho, A., Phinney, D., …& Tcheng, J. (2020). Tele-health transformation: COVID-19 and the rise of virtual care. Journal of the American Medical Informatics Association, 27(6), 957-966.
  19. Wright, A. J. (2018). Equivalence of remote, online administration and traditional, face-to-face administration of Woodcock-Johnson IV cognitive and achievement tests. Archives of Assessment Psychology, 8(1), 23-35.
  20. Wright, A. J. (2020). Equivalence of remote, digital administration and traditional, in-person administration of the Wechsler Intelligence Scale for Children, Fifth Edition. Psychological Assessment, 32(9), 809-817.
