ORIGINAL ARTICLE
Year : 2022  |  Volume : 36  |  Issue : 2  |  Page : 37-44

Phoneme monitoring abilities in bilingual adolescents and young adults who stutter


Bangalore Speech and Hearing Research Foundation, Dr. S.R. Chandrasekhar Institute of Speech and Hearing, Bengaluru, Karnataka, India

Date of Submission18-Nov-2022
Date of Decision18-Dec-2022
Date of Acceptance19-Dec-2022
Date of Web Publication10-Jan-2023

Correspondence Address:
Ms. Archita Kumari
Postgraduate Institute of Medical Education and Research, Chandigarh
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jisha.jisha_30_22

  Abstract 


Introduction: Stuttering occurs when the simultaneous and sequential programming of the muscle movements required to produce a continuous flow of speech is disrupted. The generalized phoneme monitoring task, in which subjects detect target phonemes appearing anywhere in the test words, has been shown to be sensitive to associative context effects. The aim was to investigate phoneme monitoring abilities in L2 (English) among bilingual adolescents and young adults who stutter. This was a comparative study. Methods: Twenty-two bilinguals (11 persons with stuttering [PWS] and 11 persons with no stuttering [PWNS]) in the age ranges of 10–16 years (adolescents) and 17–24 years (young adults) were included. An adaptation of the Language Experience and Proficiency Questionnaire (LEAP-Q) to the Indian context was administered to all participants. The English phonemes with the highest frequency of occurrence were selected, and a list of picturable bisyllabic words containing these target phonemes in the initial and medial positions was prepared. The target phonemes were prerecorded using PRAAT software. In Phase 1, PsychoPy software was used to present the target phoneme along with the familiarized picture and to record the participants' responses; keyboard keys were assigned to YES/NO. In Phase 2, the same pictures were presented, and the response rate and accuracy in naming the pictures shown were calculated. Descriptive statistics and one-way ANOVA were carried out. Results: Bilingual PWS took more time than bilingual PWNS to identify the presence or absence of the target consonant, and bilingual PWNS had a higher number of correct responses than bilingual PWS. With respect to the position of the target phoneme, incorrect responses were similar whether the target phoneme was in the initial or the medial position.
Conclusion: The current study advances the theoretical understanding of the causes of stuttering, particularly by supporting the psycholinguistic causes of stuttering.

Keywords: Bilinguals, Language Proficiency Questionnaire, persons with stuttering, phoneme monitoring, PsychoPy


How to cite this article:
Kumari A, Ghadei A, Thontadarya S, Srividya A. Phoneme monitoring abilities in bilingual adolescents and young adults who stutter. J Indian Speech Language Hearing Assoc 2022;36:37-44

How to cite this URL:
Kumari A, Ghadei A, Thontadarya S, Srividya A. Phoneme monitoring abilities in bilingual adolescents and young adults who stutter. J Indian Speech Language Hearing Assoc [serial online] 2022 [cited 2023 Feb 5];36:37-44. Available from: https://www.jisha.org/text.asp?2022/36/2/37/367506




  Introduction


Stuttering is a disorder of fluency characterized by repetitions and prolongations of sounds and syllables, along with hesitations, pauses, and blocks that interrupt the continuous flow of speech. Undesirable movements of the face and extremities can accompany the disruption in the flow of speech. Van Riper stated that the cause of stuttering is heterogeneous. Among linguistic aspects, lexical access and phonological encoding have received the most attention.[1] Deficits in phonological encoding are considered a plausible cause of the disfluencies in stuttering, and many theories have been proposed to link phonological encoding and stuttering. Levelt described language production in three stages, namely the conceptualizer, the formulator, and the articulator.[2] The EXPLAN theory by Howell states that disfluencies in speech are due to temporal asynchronies between speech planning and execution;[3] planning of complex linguistic units, a fast speech rate, and coping strategies are reasons for these asynchronies. The covert repair hypothesis presupposes that the correction of errors in the speech plan caused by faulty phonological encoding is the cause of stuttering.[4] The WEAVER++ model explains the interaction between the planning, comprehension, and monitoring phases of speech production. It proposes that word planning is a stepwise process that traverses from lexical retrieval to word-form encoding, and, like Levelt's model, it assumes two monitoring routes, an internal one and an external one.

Lexical access in bilingual speakers

Current models of lexical access in bilingual speakers often presume that both languages share a common semantic framework.[5],[6],[7],[8],[9],[10] Each bilingual's semantic/conceptual representations are linked to the lexical items of both languages. A few researchers have suggested that conceptual representations are influenced by language. These contemporary accounts overwhelmingly support the assumption that bilinguals have a single conceptual store which they can access through both languages, at least for commonly used words.

More recent accounts, on the other hand, suggest that activation of a bilingual's semantic system extends to both languages, regardless of which language is set for responding.[5],[7],[10],[11] These theories assert that both of a bilingual's languages are activated simultaneously, independent of the language employed for production. In other words, current models presume that, following the general principle of spreading activation, a bilingual's two lexicons are engaged at the same time.

Role of phoneme monitoring in bilinguals

Conventionally, the phoneme monitoring task has been employed to investigate the phonological representations involved in speech perception. In this task, participants must determine whether a target phoneme (or a letter equivalent to that phoneme) appears in an auditorily presented stimulus.

Using the phoneme monitoring task, Colomé explored phonological activation of a nontarget language.[12] When the task is applied to bilinguals, target phonemes are experimentally manipulated to fall into one of three categories: (a) part of the response language (answer "yes" – filler trial); (b) part of the nonresponse language (answer "no" – critical trial); and (c) part of neither language (answer "no" – control trial). The participants in this study took longer to reject phonemes that appeared in Spanish (the nontarget language) than control phonemes that did not exist in Catalan or Spanish.

Shivabasappa and Krishnan examined the nature of lexical selection in bilinguals using a phoneme monitoring task in two orthographically distinct languages (Kannada, alphasyllabic; English, alphabetic). A total of 120 images were sorted into two blocks (each with 60 items) for naming in Kannada (L1) and English (L2), and the remaining eight pictures, four in each language, were utilized as testing items. The study was run in two blocks, each containing pictures presented in three conditions (related, unrelated, and control) using DMDX software. According to the findings, respondents in the related condition took substantially longer to reject a phoneme that was available in the nontarget language; this was the case both in Kannada and in English.[13]

Phonological encoding in bilinguals with stuttering

A study of phonological encoding ability in ten people who stutter and ten people who do not stutter was conducted by Sasisekaran and De Nil.[14] Phoneme monitoring was required during silent picture naming and auditory perception tasks, with noun phrases and compound words as stimuli. In phoneme monitoring during silent naming, a comparison of reaction times between the groups revealed that people who stutter had significantly slower reaction times than people who do not stutter. In the auditory perception task, there was no significant difference in phoneme monitoring reaction time. According to these results, persons with stuttering (PWS) have phoneme monitoring impairments rather than perceptual deficits.

Another study, by Darshini, investigated phoneme monitoring in persons with stuttering during silent picture naming and auditory perception tasks in India. All the participants were native, monolingual Kannada speakers. The stimuli were 27 Kannada trisyllabic (CVCVCV) words containing the target phonemes /p/, /s/, /t/, /m/, /k/, /r/, /b/, and /h/ in the initial, medial, or final position. Results showed that in both tasks the difference in reaction time between PWS and persons who do not stutter (PNS) was considerable, while only the silent naming task showed a significant difference in accuracy. Compared with the medial and final positions, phonemes in the initial position showed slower reaction times and higher accuracy.[15]

Sangeetha studied phonological encoding skills in 30 bilingual (Kannada–English) adults: 15 bilingual adults who stutter and 15 bilingual adults who do not. The investigation comprised four tasks: a simple motor task, picture familiarization and naming, a phoneme monitoring task, and an auditory tone monitoring task. Bilingual adults who stutter (BAWS) exhibited longer response times and lower accuracy than bilingual adults who do not stutter (BAWNS) in the simple motor task, although the difference was not statistically significant. Similar results were observed in the phoneme and auditory tone monitoring tasks, but there with a statistically significant difference. According to this study, phonological abilities did not differ much between the first and second languages.[16]

Very few studies have used the phoneme monitoring paradigm with Indian bilingual persons. Given the large number of bilingual speakers in India, there is a need to study how phonological encoding abilities differ in bilingual adults.[17] Very few studies have examined bisyllabic words in the second language to establish a position effect; for this reason, the present study focuses on phoneme monitoring in bisyllabic words in bilingual adults. Numerous studies have shown that children who stutter are less efficient at phonological encoding, but none has established with certainty whether this inefficiency is due to a delay in the timely encoding of phonemic segments during speech production, to a greater number of errors during the phonological encoding process, or to a combination of the two in young adults.[16] Although such research has been done in Western contexts, the conclusions cannot be directly applied to other languages. It therefore became necessary to examine the phonological encoding abilities of adolescents and young adults who stutter in an Indian context.

The present study aimed to investigate the phoneme monitoring abilities of bilingual adolescents and young adults who stutter.

Objectives of the study

  1. To analyze and compare the reaction time and accuracy in phoneme monitoring tasks
  2. To check the influence of the position of the target phoneme on monitoring abilities
  3. To compare phoneme monitoring tasks in both groups.



  Methods


Participants

The present study included two groups: a clinical group of 11 bilingual adolescents and young adults who stutter and a control group of 11 bilingual adolescents and young adults who do not stutter. The groups were matched for age, gender, handedness, socioeconomic status, and educational level. The sample size was calculated by a statistician using G*Power, based on a study by Sangeetha and Geetha.[18]
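The G*Power calculation itself is not reproduced in the paper, but the logic of an a priori sample-size estimate for a two-group comparison can be sketched with the standard normal-approximation formula. The effect size below (d = 1.2) is an assumed value chosen for illustration, not one reported by the authors:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-group comparison
    (normal approximation; an exact t-based tool such as G*Power
    gives a slightly larger value)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_beta = norm.ppf(power)            # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# An assumed large effect size (d = 1.2) yields roughly the
# 11 participants per group used in the study.
print(n_per_group(1.2))   # 11
```

With a conventional medium effect (d = 0.5), the same formula gives 63 per group, which is why small-sample studies like this one implicitly assume large effects.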

Participants proficient in the English language, within the age ranges of 10–16 years (adolescents) and 17–24 years (young adults), were considered for the study. Only participants with no history of neurological problems or of intellectual, sensory (vision and hearing), or other communication disorders were included.

In the clinical group, individuals with developmental stuttering of a severity above the moderate degree, assessed on the Stuttering Severity Instrument (SSI)-4 (Riley and Bakker, 2009) by a speech-language pathologist, were selected.[19] The Language Experience and Proficiency Questionnaire (LEAP-Q), adapted to the Indian context (Maitreyee and Goswami, 2009), was administered to all the subjects. The purpose and procedure of the study were explained to all participants.[20]

Procedure

The phoneme monitoring task was done in two stages:

  • Stage 1: Stimulus preparation and task design
  • Stage 2: Administering the task to participants


Stage 1: Stimulus preparation and task design

Stimuli

For the present study, the English phonemes with the highest frequency of occurrence, /r/, /l/, /s/, /z/, /v/, /tʃ/, /ʃ/, and /h/, were considered.[21] Twenty-eight picturable bisyllabic (CVCV) words having the target consonants in either the initial or the medial position were selected. From these 28 words, a set of 84 colorful images was obtained under two conditions: congruent and noncongruent. In the noncongruent condition, each visual stimulus was preceded by a spoken phoneme that was not part of the picture's name in English (L2) and thus required a NO response; for example, the picture of a Lego was preceded by a recording of /p/, which is not part of the picture name in English (Lego). In the congruent condition, each visual stimulus was preceded by a spoken phoneme that was part of the picture's name in English (L2) and thus required a YES response; for example, the picture of a fairy was preceded by a recording of /r/, which is part of the picture name in English (Fairy).

Two speech-language pathologists evaluated and verified the images on three criteria: (a) image and name agreement (picture-to-name comparability: no comparability, good comparability); (b) word familiarity (familiarity of the target noun in everyday usage: unknown, well known); and (c) image appropriateness (whether the target noun is age appropriate: inappropriate, appropriate).

Words and pictures that scored poorly were replaced with more acceptable items. Two speech-language pathologists validated the audio samples of the target consonants paired with the vowel /a/. Eighty-four pictures were presented in the phoneme monitoring paradigm: 28 in the congruent condition, requiring a YES response, and 56 in the noncongruent condition, requiring a NO response.
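The 28 congruent (YES) and 56 noncongruent (NO) phoneme-picture pairings described above can be assembled into a randomized trial list. This is a minimal sketch; the phoneme-word pairs shown are hypothetical placeholders echoing the paper's "fairy"/"Lego" examples, not the actual stimulus set:

```python
import random

def build_trials(congruent_pairs, noncongruent_pairs):
    """Combine congruent (YES) and noncongruent (NO) phoneme-picture
    pairs into a single shuffled trial list."""
    trials = [{"phoneme": p, "picture": w, "expected": "YES"}
              for p, w in congruent_pairs]
    trials += [{"phoneme": p, "picture": w, "expected": "NO"}
               for p, w in noncongruent_pairs]
    random.shuffle(trials)
    return trials

# Hypothetical examples mirroring the paper's illustrations:
congruent = [("r", "fairy")]                     # /r/ occurs in "fairy" -> YES
noncongruent = [("p", "lego"), ("z", "fairy")]   # phoneme absent -> NO
trials = build_trials(congruent, noncongruent)
print(len(trials))  # 3
```

In the actual experiment the same construction would yield 28 YES and 56 NO trials, giving the 84-item list.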

Instrumentation

PRAAT software (version 5.3, developed since 1992 by Paul Boersma and David Weenink) was used to prerecord the target phonemes in a sound-treated room at an appropriate intensity. PsychoPy software (version v2021.1.4, developed by Jonathan Peirce) was used to present the prerecorded audio of the target phoneme followed by the familiarized picture and to record the responses given by the participants. The right and left arrow keys on the keyboard were assigned to the YES and NO responses, respectively.

Test design

  • Phase 1 [Figure 1]: A blank screen appeared for 700 ms before the target phoneme was presented. The target phoneme was followed by a 3000 ms presentation of an image, and the participants were instructed to press YES or NO immediately on viewing the image. An interstimulus interval of 700 ms was given before the following item was presented. The response sheet was extracted after all the items were completed for each participant
  • Phase 2 [Figure 2]: The same trial structure was used: a blank screen for 700 ms, the target phoneme, and a 3000 ms presentation of an image. On viewing the image, the participants were instructed to name the picture shown. The response sheet was extracted after all the items were completed for each participant.
Figure 1: Pictorial representation of Phoneme Monitoring Task for yes/no response in Phase 1

Figure 2: Pictorial representation of Phoneme Monitoring Task for naming the shown picture in Phase 2

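The Phase 1 timeline (700 ms blank, target phoneme, 3000 ms picture with response window, 700 ms interstimulus interval) and the reaction time measure can be represented schematically without any experiment software; a real implementation would instead use PsychoPy's clock and keyboard components. This is a sketch of the timing logic only:

```python
# Schematic of one Phase-1 trial; durations in milliseconds, per the design.
PHASE1_TIMELINE = [
    ("blank", 700),      # blank screen before the phoneme
    ("phoneme", None),   # prerecorded target phoneme (duration = audio length)
    ("picture", 3000),   # picture shown for 3000 ms; YES/NO response window
    ("isi", 700),        # interstimulus interval before the next trial
]

def reaction_time_ms(stimulus_onset_ms: float, keypress_ms: float) -> float:
    """RT = time from stimulus presentation to the YES/NO keypress,
    as measured in the study."""
    return keypress_ms - stimulus_onset_ms

# Hypothetical timestamps: picture appears at 1400 ms, key pressed at 2712.5 ms.
print(reaction_time_ms(1400.0, 2712.5))  # 1312.5
```

In PsychoPy itself, the onset timestamp would come from the window flip and the keypress timestamp from the keyboard event, but the subtraction is the same.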


Stage 2: Administering the task to participants

Procedure

First, each participant was given the list of 28 stimulus pictures and their names to familiarize them with the stimuli. After 1 h, the phoneme monitoring task was carried out. It was explained to the participants that they would hear a phoneme and then see a familiarized picture on screen.

For Phase 1, participants were instructed to press YES (right arrow key) if, on silently naming the picture, the phoneme was present in its name, and to track the target phoneme regardless of its position or the vowel associated with it; that is, the phoneme could be in the initial or the medial position and could be associated with any vowel in the name of the image. If the phoneme was missing from the picture's name, participants were instructed to press NO (left arrow key). Participants were also instructed to respond quickly and precisely.

For Phase 2, the same set of pictures was presented, and participants were asked to name each picture after the phoneme presentation followed by the picture presentation.


  Results


Comparison of reaction time among both groups in phoneme monitoring task

For Phase 1 of the phoneme monitoring task, the presence or absence of a target phoneme was monitored by looking at a picture and silently naming it. The reaction time was measured from the time the stimuli were presented until the subject responded by pressing YES or NO.

Descriptive analysis was done to obtain the mean reaction times of the two groups in Phase 1 [Table 1]; the reaction times in the phoneme monitoring task are listed for the congruent, noncongruent, and overall conditions.
Table 1: Reaction time in phoneme monitoring task across both groups in phase 1



The comparison revealed that PWS had greater overall and noncongruent reaction times than persons with no stuttering (PWNS) [Figure 3], i.e., PWS took more time to react after the presentation of the stimuli, whereas PWNS took more time to react after the presentation of the stimuli in the congruent condition and hence had a greater reaction time than PWS in that condition.
Figure 3: Overall Reaction Time in Phase 1 among PWS and PWNS



A one-way ANOVA (Kruskal–Wallis) was done [Table 2], and the results showed a significant difference between PWS and PWNS for all three reaction time (RT) values (overall, congruent, and noncongruent).
Table 2: One-way ANOVA across reaction time in phase 1

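The Kruskal–Wallis test (the nonparametric counterpart of the one-way ANOVA used here) can be run directly in Python. The reaction times below are synthetic values invented to illustrate the comparison; they are not the study's data, although they are scaled to resemble the group means reported later:

```python
from scipy.stats import kruskal

# Synthetic reaction times standing in for the two groups of 11 participants;
# the study's raw data are not reproduced here.
rt_pws = [2.9, 3.1, 3.4, 2.8, 3.2, 3.0, 3.3, 2.7, 3.5, 3.1, 2.95]
rt_pwns = [1.3, 1.25, 1.4, 1.35, 1.28, 1.32, 1.38, 1.27, 1.31, 1.36, 1.29]

# H statistic and p-value; p < 0.05 indicates a significant group difference.
stat, p = kruskal(rt_pws, rt_pwns)
print(p < 0.05)  # True
```

Because the two synthetic groups do not overlap at all, the test is significant, mirroring the pattern reported for RT in Tables 2 and 4.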


For Phase 2, descriptive analysis was done to obtain the mean and median reaction times of the two groups [Table 3]; the reaction times in Phase 2 are listed for the congruent, noncongruent, and overall conditions.
Table 3: Reaction time in phoneme monitoring task across both groups in phase 2



The comparison revealed that PWS had greater reaction times in all three conditions (overall [Figure 4], congruent, and noncongruent), i.e., PWS took more time than PWNS to react after the presentation of the stimuli.
Figure 4: Overall Mean and Median of Reaction time in Phase 2 among PWS and PWNS



A one-way ANOVA (Kruskal–Wallis) was done [Table 4], and the results showed a significant difference between PWS and PWNS for all three RT values (overall, congruent, and noncongruent).
Table 4: One-way ANOVA across reaction time in phase 2



Comparison of accuracy among both groups in phoneme monitoring task

Descriptive analysis was done to obtain the mean accuracy of the two groups [Table 5]; the accuracy values in the phoneme monitoring task are listed for the congruent, noncongruent, and overall conditions.
Table 5: Accuracy in phoneme monitoring task across both groups



The comparison revealed that PWNS had greater accuracy in all three conditions (overall [Figure 5], congruent, and noncongruent), i.e., PWNS answered correctly for more of the stimuli than PWS.
Figure 5: Overall Accuracy in Phoneme Monitoring Task among PWS and PWNS



A one-way ANOVA (Kruskal–Wallis) was done [Table 6], and the results showed no significant difference between PWS and PWNS for any of the three accuracy values (overall, congruent, and noncongruent).
Table 6: One-way ANOVA across accuracy in phoneme monitoring task



The influence of position of target phoneme on monitoring abilities

Descriptive statistics, i.e., the percentage of correct responses to the 28 congruent stimuli in the initial and medial positions across groups, showed little difference in the percentage of correct responses between the initial and medial positions in the congruent condition [Table 7].
Table 7: The influence of position in congruent condition of target phoneme on monitoring abilities

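The percent-correct-by-position measure behind Table 7 is a simple aggregation. The sketch below uses synthetic responses (not the study's data), constructed so that the medial position shows slightly more errors, as the study reports:

```python
def percent_correct(responses):
    """responses: list of (position, is_correct) tuples.
    Returns the percentage of correct responses per position."""
    totals, correct = {}, {}
    for pos, ok in responses:
        totals[pos] = totals.get(pos, 0) + 1
        correct[pos] = correct.get(pos, 0) + (1 if ok else 0)
    return {pos: 100.0 * correct[pos] / totals[pos] for pos in totals}

# Synthetic responses: 1 error in 10 initial trials, 2 errors in 10 medial trials.
data = ([("initial", True)] * 9 + [("initial", False)]
        + [("medial", True)] * 8 + [("medial", False)] * 2)
print(percent_correct(data))  # {'initial': 90.0, 'medial': 80.0}
```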



  Discussion


The purpose of the current study was to compare the phonological monitoring skills of bilingual PWS with those of bilingual PWNS. The two main measures considered were reaction time, the time taken to respond YES or NO (or to respond verbally) after the presentation of the stimuli, and accuracy, the number of correct responses in each task. Hindi–English bilingual PWS and PWNS in the age range of 10–24 years were considered for this study. All the stimuli were prepared and presented in the second language (English), and reaction time, accuracy, and position effect were noted.

Phoneme monitoring task – Reaction time and accuracy measures

In the phoneme monitoring task, PWS took more time (3.01 ± 0.475) than PWNS (1.32 ± 0.058) to respond correctly, both in identifying the presence or absence of the phoneme in the picture shown and in the verbal response after the presentation of the stimuli; that is, PWS had slower reaction times than PWNS. The reaction time difference between groups was significant, whereas the accuracy difference was not, indicating that PWS have higher reaction times and somewhat lower accuracy than PNS in phoneme monitoring during silent naming and voice response. A similar study by Sangeetha in an Indian context showed significant differences in the phoneme monitoring abilities of bilinguals with stuttering (BWS) and bilinguals with no stuttering (BWNS): BWS took more time to react, with less accuracy, than BWNS, and it was concluded that BWS have monitoring deficits along with phoneme encoding difficulties.[16]

Another study, by Sasisekaran, De Nil, Smyth, and Johnson (2006), showed contrary results: PWS and PWNS had similar reaction times for monitoring the initial consonants of Chinese syllables (Shenmu), but PWS had significantly slower reaction times for monitoring simple or compound vowels (Yunmu).[22]

These results lend credence to psycholinguistic models of stuttering that attribute disfluencies in PWS to phonological encoding.[3],[8],[23],[24] The fact that BAWNS performed better on the phoneme monitoring task than BAWS demonstrates that people who stutter have challenges with phonological encoding, suggesting that linguistic rather than motor issues may be responsible for the disfluencies.[25] It is plausible that the familiarization task, in which the stimuli were named prior to the task, facilitated lexical access and reduced group differences.

Position effect – Initial position versus medial position

The present study revealed more errors in the medial position than in the initial position in both groups [Table 7]. PWNS had a higher percentage of correct responses in the initial position (100%) than in the medial position (99.30%), and PWS likewise had a higher percentage of correct responses in the initial position (93.70%) than in the medial position (92.12%).

These results agree with the study by Costa and Caramazza, which stated that during speech production the phonological encoding of segments in the first position happens prior to the encoding of segments in the second or third positions.[26] In support of this, Wheeldon and Levelt also stated that phonological encoding in speech production proceeds in left-to-right succession.[25]

Dijkstra, Roelofs, and Fieuws studied orthographic effects on phoneme monitoring and concluded strongly that the position of the target phoneme in the word influences the phoneme monitoring task.[27] Their possible explanations for the observed findings are as follows: the reaction time difference can be considered a measure of the lexical contribution to the phoneme-detection process; the reaction time pattern reflects feedback from the lexical level to the sublexical level; and the orthographic representation of phonemes supports a more stable representation of the target in working memory over time.

Another study, on lexical access in adults who stutter, was done by Howell and Bernstein Ratner.[28] These authors investigated the latency of lexical access for nouns and verbs in adults who stutter and compared it with that of adults who do not stutter, using the phoneme monitoring paradigm. The results showed that verbs were slower to monitor than nouns in both groups, that initial phonemes elicited faster and more accurate responses than medial and final phonemes, and that adults who stutter had more difficulty with medial phonemes. Adults who do not stutter performed better than adults who stutter, although not significantly.


  Conclusion


When compared to PWNS, PWS in the current study generally exhibited phoneme monitoring deficits. The threshold for beginning covert repairs is estimated to be lower for PWS, since they tend to be extremely vigilant in monitoring the errors in their motor plan (vicious circle hypothesis). The current study advances the theoretical understanding of the causes of stuttering, particularly by supporting psycholinguistic accounts of stuttering.

Future directions

The present study can be extended to a larger population and across age groups to trace the developmental trend of phonological encoding skills. A study of variations in phonological encoding across different proficiency levels can be performed. Furthermore, this work can be extended to all phonemes across each language of India, and to various disorders in which lexical access is affected.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Van Riper C. The Nature of Stuttering. Old Tappan, NJ: Prentice Hall; 1982.
2. Levelt WJ. Speaking: From Intention to Articulation. Cambridge, MA: MIT Press; 1989.
3. Howell P. Assessment of some contemporary theories of stuttering that apply to spontaneous speech. Contemp Issues Commun Sci Disord 2004;31:122-39.
4. Postma A, Kolk H. The covert repair hypothesis: Prearticulatory repair processes in normal and stuttered disfluencies. J Speech Hear Res 1993;36:472-87.
5. De Bot K. A bilingual production model: Levelt's speaking model adapted. Appl Linguist 1992;13:1-24.
6. Costa A, Miozzo M, Caramazza A. Lexical selection in bilinguals: Do words in the bilingual's two lexicons compete for selection? J Mem Lang 1999;41:365-97.
7. Green DW. Control, activation, and resource: A framework and a model for the control of speech in bilinguals. Brain Lang 1986;27:210-23.
8. Kroll JF, Stewart E. Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. J Mem Lang 1994;33:149-74.
9. Potter MC, So KF, Eckardt BV, Feldman LB. Lexical and conceptual representation in beginning and proficient bilinguals. J Verbal Learning Verbal Behav 1984;23:23-38.
10. Poulisse N, Bongaerts T. First language use in second language production. Appl Linguist 1994;15:36-57.
11. Poulisse N. Language production in bilinguals. In: de Groot AM, Kroll JF, editors. Tutorials in Bilingualism: Psycholinguistic Perspectives. Mahwah, NJ: Lawrence Erlbaum Associates; 1997. p. 201-24.
12. Colomé À. Lexical activation in bilinguals' speech production: Language-specific or language-independent? J Mem Lang 2001;45:721-36.
13. Shivabasappa PB, Krishnan G. Language non-specific lexical activation in bilinguals: Evidence from the phoneme monitoring task. J All India Inst Speech Hear 2011;30:160-8.
14. Sasisekaran J, De Nil LF. Phoneme monitoring in silent naming and perception in adults who stutter. J Fluency Disord 2006;31:284-302.
15. Darshini KJ. Phonological Encoding in Persons with Stuttering through Phoneme Monitoring Tasks [unpublished dissertation]. Mysore: University of Mysore; 2015.
16. Sangeetha M. Phonological Encoding Abilities in Bilingual Adults Who Stutter. ARF Project. Mysore: All India Institute of Speech and Hearing; 2018.
17. Multilingualism in India. Available from: https://en.wikipedia.org/wiki/Multilingualism_in_India. [Last accessed on 2022 Nov 29].
18. Sangeetha M, Geetha MP. Phonological Encoding in Children Who Stutter. ARF Project. Mysore: All India Institute of Speech and Hearing; 2017.
19. Riley G, Bakker K. SSI-4: Stuttering Severity Instrument. PRO-ED; 2009.
20. Maitreyee R, Goswami SP. Language Proficiency Questionnaire: An Adaptation of LEAP-Q in Indian Context [unpublished dissertation]. Mysore: University of Mysore; 2009.
21. Howell P, Au-Yeung J, Yaruss JS, Eldridge K. Phonetic difficulty and stuttering in English. Clin Linguist Phon 2006;20:703-16.
22. Sasisekaran J, De Nil LF, Smyth R, Johnson C. Phonological encoding in the silent speech of persons who stutter. J Fluency Disord 2006;31:1-21.
23. Wingate M. The Structure of Stuttering. New York: Springer Verlag; 1988.
24. Perkins WH, Kent RD, Curlee RF. A theory of neuropsycholinguistic function in stuttering. J Speech Hear Res 1991;34:734-52.
25. Wheeldon LR, Levelt WJ. Monitoring the time course of phonological encoding. J Mem Lang 1995;34:311-34.
26. Costa A, Caramazza A. The production of noun phrases in English and Spanish: Implications for the scope of phonological encoding in speech production. J Mem Lang 2002;46:178-98.
27. Dijkstra T, Roelofs A, Fieuws S. Orthographic effects on phoneme monitoring. Can J Exp Psychol 1995;49:264-71.
28. Howell TA, Bernstein Ratner N. Use of a phoneme monitoring task to examine lexical access in adults who do and do not stutter. J Fluency Disord 2018;57:65-73.

