Detecting Vague Words & Phrases in Requirements Documents in a Multilingual Environment

Breno D. Cruz, Bargav Jayaraman†, Anurag Dwarakanath†, and Collin McMillan
Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA
Email: {bdantasc, cmc}
†Accenture Technology Labs India, Bangalore, Karnataka, India
Email: {bargav.jayaraman,

Abstract—Vagueness in software requirements documents can lead to several maintenance problems, especially when the customer and development team do not share the same language. Currently, companies rely on human translators to maintain communication and limit vagueness by translating the requirements documents by hand. In this paper, we describe two approaches that automatically identify vagueness in requirements documents in a multilingual environment. We perform two studies for calibration purposes under strict industrial limitations, and describe the tool that we ultimately deploy. In the first study, six participants, two native Portuguese speakers and four native Spanish speakers, evaluated both approaches. Then, we conducted a field study to test the performance of the best approach in real-world environments at two companies. We describe several lessons learned for research and industrial deployment.

I. INTRODUCTION

Requirements documents in software development projects are almost always written in a natural language such as English [27]. Using natural language makes the documents easy to comprehend [1] and share. However, usage of natural language also brings in the fallacies of language, such as ambiguities and vagueness. As Berry and Kamsties point out, such problems in requirements confuse stakeholders, leading to "diverging expectations and inadequate or undesirably diverging implementations" [6].

Vagueness is the phenomenon that makes a statement have multiple interpretations due to a lack of precision. An example is "The system should respond as fast as possible."
Here the interpretation of "as fast as" can range from a few milliseconds to a few days. It is widely believed that such problems in requirements, including vagueness, should be pointed out to the authors of the documents [6], [17], [31]. Automated tool support that detects problematic words and phrases is desirable because human readers tend to resolve the issues unconsciously and unknowingly [25]. In the above example, a human reader may believe that a response time of a few seconds would suffice. If his or her believed meaning is different from the author's, then an inconsistency may exist without either person being aware of it. Automated tools can help prevent these situations by bringing them to the reader's attention so that he or she may resolve the vagueness consciously, possibly with the assistance of the author.

Many tools and techniques have been developed for the automated identification of ambiguities and vagueness in software requirements. These include the use of a manually curated checklist [31], [16], usage of linguistic parsers [29], and supervised machine learning techniques [32]. These tools point out ambiguities and vagueness in a single natural language.

Research Problem & Industrial Context. The context of our work is a large multinational corporation which has an internally-developed tool that detects vagueness in requirements documents in English only. The English-only tool is essentially a blacklist that highlights words that are in a manually curated list of words that are frequently vague. While relatively simple conceptually, the blacklist was extremely expensive to create and is popular among the company's clients. The company views the tool as an asset and competitive advantage.

The company has clients in several countries in which English is not a first language.
Therefore, the company is eager to answer the research problem: can the English-only tool be adapted to serve their non-English-speaking customers through the use of existing machine translation technology, i.e., without the need to manually curate a new blacklist?

Our Strategy to Address the Research Problem. We proposed two modifications to the existing English-only tool, and tested these modifications in studies with domain experts at two companies: the large multinational (denoted MC throughout the paper for client confidentiality), and a small Brazilian software developer (BC). We studied vagueness detection in two target languages: Spanish and Brazilian Portuguese. The modifications are, generally speaking, different locations at which we place a machine translator. First, we attempt the least expensive idea: to translate the blacklist directly from English into a target language. Second, we build a more complex approach: to translate the entire requirements document from the target language into English, use the English-only tool to mark the document, and then use the word alignment from the translator to highlight the vague words in the target language document. Details and evaluation of the first modification are in Sections III and V. For the second modification, see Sections III, V, and VII.

We found several important "lessons learned" for other researchers, which we present in Section VIII.

II. BACKGROUND & RELATED WORK

The usage of natural language for the documentation of software requirements brings the shortcomings of language into software engineering. There are various fallacies in natural language, such as ambiguity (where a statement can have multiple distinct interpretations), vagueness (where a statement can have a continuum of interpretations), uncertainty (where a statement weakens its proposition of truth), and others. In computational linguistics and related literature, these phenomena are considered distinct and studied separately. In the domain of software engineering, such phenomena are generally grouped under the class of ambiguity [29] and studied accordingly.

In software engineering, ambiguity is considered one of the key issues in the usage of natural language to document software requirements. Ambiguity is generally defined as the property leading to "multiple interpretations" [5], [14]. It is also recognized that ambiguities can arise due to the linguistic characteristics of language (e.g., the phrase "as fast as possible", which is ambiguous because it is open to more than one interpretation, and vague because it is uncertain) as well as due to the analyst's understanding of the domain, i.e., the knowledge of a domain can make an analyst interpret a requirement in multiple ways. An example of a domain-specific ambiguity, as presented in [18], is "if a bank customer maintains a minimum balance in his or her account, there is no monthly service charge". Here, the knowledge of the general practice in banking raises the ambiguity of whether the minimum balance is to be counted on a per-day basis or is the average of the daily balances in a month. Ambiguities as in the latter example are referred to as the "RE Context" [18] or the "Software Engineering Context" [5], [20]. It is also evident that the same understanding of the domain can actually help disambiguate a requirement. An example from [18] is the phrase "generate a dial tone".
The characteristics of a dial tone may be precise to domain experts but ambiguous to others.

It is strongly believed that ambiguity in requirements should be tackled as early as possible in the creation of the documents. Ambiguous requirements can lead to building systems that miss stakeholder expectations [5], [32], involve costly rework [5], [11], or even invoke litigation [5]. It is seen that fixing errors later in the software development life cycle significantly increases project costs. Kiyavitskaya et al. [20] suggest such costs increase exponentially. Boehm et al. [7] report that the cost to fix a defect in the testing phase is 100 times more expensive than finding and fixing the defect in the requirements and design phases.

Ambiguity in natural language requirements has long been studied in software engineering. Most solutions use a set of look-up lists to identify words and phrases that are ambiguous. Such studies include the work of Berry et al. [5], Tjong et al. [29], and Gleich et al. [15]. Supervised learning methods have also been employed [32], [23], [4].

In computational linguistics, vagueness is said to be that property of words and phrases due to which one cannot assign a specific and precise meaning to a sentence [12], [10]. An example of a vague word is "tall". It isn't clear what constitutes a "tall" person. If a person of 6 feet in height is considered tall, then would a person of 5 feet 11 inches not be considered tall? Such borderline cases characterize the vagueness phenomenon [19]. In contrast to vagueness, ambiguity is when there are several but distinct and precise meanings to a sentence [12], [5]. Vagueness is also said to be "inquiry resistant" [19], [5], i.e., no amount of empirical measurement of the height of a person would clarify whether the person is tall.

There have been several studies to automatically identify hedging and uncertainty, which are related to vagueness. Some of this work includes [32], [24], which use supervised machine learning techniques.
A semi-supervised learning approach is taken in [24]. Most studies focus on English, and a few address other languages such as Hungarian [30].

Jain et al. [16] and Bucchiarone et al. [8] present the identification of problem phrases in software requirements using a list of words and phrases manually curated by business analysts. The list was specifically built for the requirements typically seen in software engineering and consists of 189 entries. This list covers different phenomena including ambiguity, vagueness, and uncertainty. The majority of the entries (68%) map to the phenomenon of vagueness. This tool has been used in several hundred projects and has seen good applicability. Our experience with the tool has been that the phenomenon of vagueness is most commonly accepted by end users as a genuine problem in requirements (i.e., is a true positive). A recent empirical study [9], [22] also showed that vagueness is the most prominent category of the various problems in software requirements. We thus aim to study the identification of vagueness across languages.

However, all the works in software engineering and computational linguistics focus on a single language, usually English. To the best of our knowledge, our work is the first to study the multilingual context, where the domain-specific knowledge of vagueness captured in English is transferred across languages for requirements documents.

III. OUR APPROACH

This section covers our approach to detect vague words and phrases in requirements documents. The input to our approach is a requirements document in natural language. The output is a markup of the requirements document that indicates the vague words and phrases. We first describe an approach for English documents. This English approach is based on a procedure in use at the Multinational Corporation (MC).
Then, we describe two novel variants we created for Portuguese¹.

In general, the approach works in three phases: 1) parse the requirements document to divide it into sentences, 2) tag the parts of speech of each word in the sentences, 3) filter the words for adjectives and adverbs that match a blacklist of known vague terms or do not match a whitelist of not-vague terms. At present, the creation of the English blacklist and English whitelist is proprietary to MC. Note that the research contribution of the approach in this paper is the application of the list to a multilingual environment, not the creation of the English blacklist or English whitelist.

¹ Specifically, Brazilian Portuguese.

A. Approach for English

The architecture of the approach for English is in Figure 1. First, a Sentence Parser reads a Requirements Document to separate the document into a list of sentences for each requirement (area 1). The parser finds breaks in the document that separate the requirements, e.g., subsection headers or rows in a table. Then each requirement is broken into a list of sentences; in our implementation we used a sentence detector from the OpenNLP library [2]. Next, we use the parts-of-speech tagger in OpenNLP to mark each word in each sentence with its most likely type (area 2). Note that we do not perform text preprocessing such as splitting or stop-word removal, since it may impact the performance of the POS tagger and may also have a significant impact in other implementations if the requirements document includes some language similar to source code (e.g., camel case word combinations).

We then filter the parts of speech to remove any words that are not modifiers, e.g., adjectives and adverbs (area 3). Our rationale is that the vagueness from modifiers is more difficult for readers to resolve than vagueness from nouns or verbs, as discussed in Section II. Then, we apply another filter to remove words that are not in a blacklist of known vague terms (area 4). The blacklist we used is created in a proprietary process. We used the proprietary blacklist because it was in use at MC, which is where we conduct our evaluation. However, freely-available solutions exist that could be used to implement our approach in another environment [13]. In addition to the blacklist we also use a whitelist for cases where a term is not present in the blacklist. The whitelist contains words that are considered not vague. The whitelist we used is also created in a proprietary process.
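The three-phase pipeline above (sentence splitting, POS tagging, then filtering modifiers against the two lists) can be sketched as follows. Since the OpenNLP models and MC's blacklist and whitelist are proprietary, this sketch substitutes a toy sentence splitter, a stub POS lookup, and small illustrative word lists; none of these stand-ins reflect the actual deployed tool.

```python
# Sketch of the three-phase English pipeline (Sec. III-A).
# The deployed tool uses OpenNLP's sentence detector and POS tagger plus
# proprietary lists; the splitter, tagger, and lists below are illustrative
# stand-ins only.
import re

BLACKLIST = {"fast", "quickly", "appropriate"}  # assumed entries, not MC's list
WHITELIST = {"alphabetical"}                    # assumed entries, not MC's list
# Toy POS lookup standing in for a real tagger (JJ = adjective, RB = adverb).
POS = {"fast": "JJ", "quickly": "RB", "possible": "JJ", "alphabetical": "JJ"}

def split_sentences(doc: str) -> list[str]:
    """Phase 1: naive sentence splitting on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]

def mark_vague(doc: str) -> list[str]:
    """Return the words flagged as vague, in document order."""
    flagged = []
    for sentence in split_sentences(doc):
        for word in re.findall(r"[\w-]+", sentence.lower()):
            tag = POS.get(word, "NN")            # phase 2: tag each word
            if tag not in ("JJ", "RB"):
                continue                         # phase 3: keep modifiers only
            if word in BLACKLIST and word not in WHITELIST:
                flagged.append(word)
    return flagged

print(mark_vague("The system should respond as fast as possible. "
                 "Sort results in alphabetical order."))  # → ['fast']
```

Note that "possible" and "alphabetical" survive as modifiers but are not flagged: the first is absent from the toy blacklist and the second is whitelisted, mirroring how the blacklist/whitelist pair interacts in the real tool.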
Another possible alteration to our approach for some environments may be to skip the filtering for modifiers, if the stakeholders in that environment experience difficulty resolving vagueness of words other than modifiers.

The final step is to mark up the requirements document so that the vague words and phrases are readily available to stakeholders (area 5). In our implementation, we convert the requirements document to a PDF, and then add highlighting colors to each word that is detected as vague. We also create an index of the phrases that contain these words. The index is navigable via a PDF viewer.

Fig. 1. Architecture of the Approach for English. Details describing this approach are in Subsection III-A.

B. Variants for Target Languages

We designed two variants of the tool for a target language. Both variants are adaptations of the English approach, using a machine translator at different stages of the process. In the first variant (V1), shown in Figure 2, the procedure is identical to the English approach, except that we use a machine translator to translate the English blacklist and English whitelist into a blacklist and whitelist for the target language (Figure 2, area 1). In cases where one word in English has multiple translations in the target language, the blacklist and the whitelist will contain all translations in the dictionary.

Fig. 2. Architecture of Variant 1, an adaptation of the English tool to a target language. Details describing this variant are in Subsection III-B.

The advantage of V1 is that the machine translator can be relatively unsophisticated, as only dictionary translations for single words are necessary. A disadvantage is that it is likely to overestimate the blacklist and whitelist. We explore the degree of this overestimation in Section IV. We chose to implement the variants for Portuguese and for Spanish due to the request from MC.
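The list inflation under V1 comes from keeping every dictionary translation of each English entry. A minimal sketch of that expansion (the two-entry dictionary is an illustrative stub, not Moses output or MC data; "should" → "deveria"/"devia" is the paper's own example):

```python
# V1: translate the English blacklist word-by-word, keeping ALL dictionary
# translations for each entry. The dictionary here is an illustrative stub.
EN_PT_DICT = {
    "should": ["deveria", "devia"],  # example pair used in the paper
    "fast": ["rápido", "veloz"],     # assumed entry for illustration
}

def translate_list(english_list: set[str], dictionary: dict) -> set[str]:
    """Expand an English word list into all known target-language forms."""
    translated = set()
    for word in english_list:
        translated.update(dictionary.get(word, []))
    return translated

pt_blacklist = translate_list({"should", "fast"}, EN_PT_DICT)
print(sorted(pt_blacklist))  # → ['deveria', 'devia', 'rápido', 'veloz']
```

Two English entries become four Portuguese entries here, which is exactly the overestimation that can cost V1 precision.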
However, it is possible to configure the machine translator to read and translate other languages, which we did not explore in this study.

The second variant (V2), shown in Figure 3, uses a machine translator to translate the requirements document from the target language into English (Figure 3, area 1). Then the procedure is identical to the procedure for the English tool, until the vague terms list in English is converted back to a list in the target language (Figure 3, area 2). We accomplish this conversion by maintaining an "alignment" list during the translation of the target language document into English. The alignment document records which word in the target language is translated into which word or words in English, for each sentence. In Figure 3 area 2, we use the alignment to mark as vague the words in the target language that were marked as vague for English. While V2 does necessitate a more complex machine translator than V1 (e.g., one capable of alignment), an advantage is that the translator is less likely to overestimate the number of vague words, because the translator will only pick one word from the blacklist. For example, in V2 the English word "should" will be translated as either "deveria" or "devia" in a Portuguese sentence, not both. In V1, "should" would be placed in the blacklist as both "deveria" and "devia". The larger number of translations in V1 could result in reduced precision.

Fig. 3. Architecture of Variant 2, an alternative adaptation of the English tool to a target language. Details describing this variant are in Subsection III-B.

C. Implementation

We implemented both variants for Portuguese. We also implemented V2 for Spanish, given our experience evaluating V1 and V2 for Portuguese (see Section IV). The machine translator we used in both cases was version 1.0 of Moses [21].

IV. PILOT STUDY PROCEDURE

This section describes two pilot studies we conducted. The first involves both V1 and V2 for Portuguese. The second, which is informed by the first study, involves only V2 for Spanish. Note that these pilot studies are intended to calibrate and inform our research; a full evaluation of the final results is in Section VII.

A. Research Questions

Broadly speaking, our objective for the pilot studies is to gain knowledge about how the variants behave for different languages. Due to resource constraints, it is not feasible to conduct a full-scale evaluation of every possible configuration of the approach.
Therefore, we pose the following two research questions in a limited pilot study:

RQ1: What is the performance in terms of quality of V1 as compared to V2?

RQ2: What terms are considered vague in the English blacklist that are not considered vague in the Portuguese and Spanish translations of that blacklist?

The purpose of RQ1 is to determine how to allocate resources for future work on the variants. Since resources were only available to continue development and evaluation of one of the variants, we pose RQ1 to provide data to assist in the decision. Variant V1 is a less expensive option overall because the translation of a blacklist is limited to a dictionary. To justify the cost of the complex machine translation tool required for V2, it is important that there is evidence that V2 has improved performance.

The rationale behind RQ2 is that some words that are vague in English are not necessarily vague in all languages. For example, the word "skin" in English is ambiguous because it could mean "body tissue" or "to remove". But in Portuguese, the word is not vague because it is translated as "pele", which definitively means the body tissue and not to remove. This pilot study is a first step to identifying the degree of this problem, and correcting it for later development and evaluation. Note that this pilot study is intended to guide our development; a thorough evaluation of this research question is conducted in Section V.

B. Methodology

The methodology that we used to answer RQ1 and RQ2 is a user study with a small number of bilingual human evaluators. We split our user study into two sections: first, one section for Portuguese, and second, a section for Spanish. To limit resource expenditure, we conducted the Portuguese study prior to the Spanish study, and adapted our procedure based on the information we learned from the Portuguese section.

We addressed RQ1 in the first study in Portuguese.
Generally speaking, our procedure had two steps: 1) the first author (a native speaker of Portuguese) created a goldset for six Portuguese requirements documents, then 2) we compared the output of each variant on these documents to the goldset. Full precision and recall values are reported in Section V; in brief, these results showed that V2 performed better than V1.

For RQ2, we recruited two native Portuguese-speaking programmers (not otherwise affiliated with the authors) to evaluate the output of the V2 implementation for Portuguese. Likewise, we recruited four native Spanish-speaking programmers to evaluate the output of the Spanish V2. Our procedure was to: 1) build one survey containing the output of the tool (both variants) for each requirements document in each language, totaling six Portuguese surveys and six Spanish surveys. Then, 2) the human evaluators completed each survey.

A survey for one document showed each requirement, followed by four radio buttons indicating four choices for every word that the tool highlighted as vague: Definitely Not Vague, Partially Not Vague, Partially Vague, and Definitely Vague. The choices were recorded as "scores" for each word: -1, -0.5, 0.5, and 1 respectively. We added the scores for each word for all human evaluators. E.g., if two evaluators rated a word Definitely Vague, the word would receive a score of 2.
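The aggregation of evaluator ratings into per-word scores can be sketched as follows; the rating combinations shown are made up for illustration, not the study's actual survey responses.

```python
# Map each evaluator's rating to a score and sum per word (Sec. IV-B).
SCORE = {
    "Definitely Not Vague": -1.0,
    "Partially Not Vague": -0.5,
    "Partially Vague": 0.5,
    "Definitely Vague": 1.0,
}

def word_score(ratings: list[str]) -> float:
    """Sum the scores given to one highlighted word by all evaluators."""
    return sum(SCORE[r] for r in ratings)

# Two "Definitely Vague" ratings yield a score of 2, as in the paper's example.
print(word_score(["Definitely Vague", "Definitely Vague"]))     # → 2.0
print(word_score(["Partially Vague", "Definitely Not Vague"]))  # → -0.5
```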

To answer RQ2 from these results, we created three "tiers" of words in the blacklist, based on a three-tiered evaluation procedure used by Rodeghero et al. [28]. In one tier, we removed words from the blacklist with scores less than or equal to -20. In a second tier, we removed words with scores of -10 or less. In a third tier, we removed words with scores of 0 or less. The size and contents of these tiers allow us to answer RQ2 (see Section V) and improve the performance of our approach for the field studies (see Section VI).

C. Subject Materials

The key subject materials in our pilot study were the Portuguese and Spanish requirements documents. We identified a range of requirements documents that have been released into the public domain. The documents range in size from seven requirements up to 40 requirements. Since these requirements are publicly available, we will provide copies in our online appendix (see Section VI-B3) upon the acceptance of this paper.

D. Threats to Validity

As a pilot study, this section contains threats to validity not suitable for a thorough evaluation; we caution that our results were intended for calibration and exploration in a resource-restricted commercial environment. (For a more thorough evaluation, see our field study in Section VI.) The three key threats to validity include the small number of human evaluators, the limited set of requirements documents, and the goldsets created by the first author. It is possible that results would vary with different evaluators or requirements.

V. PILOT STUDY RESULTS

In this section, we present our answers to RQ1 and RQ2, as well as our data and rationale. These answers are the basis for the improvements of the field studies, which we present in Section VI.

TABLE I
AVERAGE VALUES OF PRECISION AND RECALL FOR VARIANT V1 AND VARIANT V2.

Variant | Word Precision | Word Recall | Phrase Precision | Phrase Recall
V1      | 2.90%          | 11.93%      | 33.69%           | 90.56%
V2      | 3.64%          | 32.20%      | 34.01%           | 94.17%

A.
RQ1: Comparison of Variants V1 and V2

For Portuguese, we found evidence that variant V2 is better than variant V1 in terms of precision and recall. The data supporting this finding are in Table I, which shows that the precision and recall of variant V2 are greater than those of variant V1. Variant V2 has a word precision of 3.64% and word recall of 32.20%, while variant V1 has a lower word precision of 2.90% and lower word recall of 11.93%. For phrase detection, variant V2 is also better than variant V1. Variant V2 has a phrase precision of 34.01% and phrase recall of 94.17%, while variant V1 has 33.69% phrase precision and 90.56% phrase recall. The superior quality of variant V2 means that it correctly detects more vague words than variant V1.

For Spanish, since none of the authors is a native Spanish speaker, it was not possible to create a goldset for gathering precision and recall. Therefore, we decided to follow the same procedure described for the Portuguese pilot study. We collected the words and comments from the questionnaires and applied them in a similarly aggressive way to create a similar set of configuration files. These files were later used for the Spanish field studies, described in Section VI.

B. RQ2: Configuration Settings

For Portuguese, we found evidence that adding or removing terms from the variant V2 blacklist and whitelist (the list of terms that are considered definitely not vague) can improve precision and recall. Variant V2 has better precision than the original setting when using aggressive settings. Table IV contains the number of modifications made to the blacklist and whitelist. The least aggressive filter is tier 1, which contains one change to the blacklist and one change to the whitelist. Note that in the aggressive approach, tier 3, we added 78 words to the whitelist and removed 21 from the blacklist, more than twice the combined amount from the other two tiers. Table II shows the values of precision and recall for the different tiers.
All tiers had better values of word precision and phrase precision when compared to the original setting. The highest word precision, 4.91%, came from the aggressive approach. Even though Table II shows that tier 2 has greater values for phrase precision and phrase recall, we decided to use tier 3, because it incorporates comments and suggestions from the questionnaires and because it has the best word precision. These suggestions and comments were not added to the lower tiers because, according to our rating system, they received values that fit the tier 3 configuration.

TABLE II
PRECISION AND RECALL FOR THE DIFFERENT TIERS FOR PORTUGUESE DOCUMENTS.

Setting  | Word Precision | Word Recall | Phrase Precision | Phrase Recall
Tier 1   | 4.23%          | 32.20%      | 34.52%           | 82.40%
Tier 2   | 4.85%          | 32.20%      | 35.70%           | 93.06%
Tier 3   | 4.91%          | 31.09%      | 35.12%           | 90.83%
Original | 3.64%          | 32.20%      | 34.01%           | 94.17%

TABLE IV
NUMBER OF CHANGES TO THE PORTUGUESE BLACKLIST AND WHITELIST ACCORDING TO TIER.

       | Blacklist       | Whitelist
       | added | removed | added | removed
Tier 1 | 0     | 1       | 1     | 0
Tier 2 | 0     | 4       | 16    | 0
Tier 3 | 0     | 21      | 78    | 0

For Spanish, we found evidence that the Spanish configuration files differ from the Portuguese files. According to our findings, using variant V2 for Spanish requires changes to both the whitelist and blacklist that are not present in the Portuguese blacklist and whitelist. One place where the difference between the Portuguese filter and the Spanish filter is visible is Table VI: the number of additions to the Spanish whitelist for the aggressive configuration, tier 3, compared to the Portuguese tier 3. The Spanish tier 3 whitelist received 156 additions, while the Portuguese tier 3 whitelist received 78 additions. Table VI shows the number of alterations that each Spanish tier received. Note that when using the aggressive filter, tier 3, it was necessary to make more changes to the original filters. Also, differently from the Portuguese filters, for the Spanish filters it was necessary to add words to the blacklist and remove words from the whitelist.

TABLE VI
NUMBER OF CHANGES TO THE SPANISH BLACKLIST AND WHITELIST ACCORDING TO TIER.

       | Blacklist       | Whitelist
       | added | removed | added | removed
Tier 1 | 0     | 4       | 19    | 0
Tier 2 | 0     | 2       | 15    | 0
Tier 3 | 8     | 6       | 156   | 1

C. Summary of the Pilot Study Results

We derived two results from our pilot study. First, variant V2 has better precision and recall when compared to variant V1.
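The word-level precision and recall figures reported throughout this section follow the standard set-based definitions against the annotator's goldset. A minimal sketch with hypothetical word sets (the actual tool output and goldsets are not reproduced here):

```python
# Word-level precision/recall against a goldset. The detected/goldset word
# sets below are hypothetical examples, not the study's real data.
def precision_recall(detected: set[str], goldset: set[str]) -> tuple[float, float]:
    """Precision = correct detections / all detections;
    recall = correct detections / all goldset words."""
    true_pos = len(detected & goldset)
    precision = true_pos / len(detected) if detected else 0.0
    recall = true_pos / len(goldset) if goldset else 0.0
    return precision, recall

detected = {"rápido", "adequado", "simples", "claro"}  # tool output (made up)
goldset = {"rápido", "adequado", "flexível"}           # annotations (made up)
p, r = precision_recall(detected, goldset)
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.50 recall=0.67
```

Phrase-level precision and recall are computed the same way, with phrases rather than individual words as the set elements.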
Second, we decided to use the aggressive set of filters, tier 3, as the configuration files of variant V2, because it contains feedback not present in the other tiers and it obtained the best overall word precision. Table IV shows the number of changes that the Portuguese tiers received. Table VI shows the changes to the Spanish tiers. Table II shows variant V2's precision and recall values when processing Portuguese documents. Our conclusion from the pilot study is that, when using variant V2, each language requires different alterations to its configuration files in order to reach desirable precision and recall.

VI. FIELD STUDY EVALUATIONS

This section describes the research questions and methodology of three field studies of our approach: one at BC (in Portuguese), and two at MC (one in Spanish and one in Portuguese). During the pilot study in Sections IV and V, we found that V2 had a higher level of performance than V1. In these field studies, we evaluate V2 in industry.

A. Research Questions

Our research objective is to evaluate V2 in both Portuguese and Spanish, in an industrial setting. As a result of the pilot studies, we found that even though V2 outperformed V1, there was evidence of further improvement after different adjustments to the blacklist in each language.
Therefore, we pose the following research questions:

RQ3: What is the degree of difference in performance of V2 for the less aggressive ("original") blacklist versus the more aggressive ("tier 3") blacklist in Portuguese?

RQ4: What is the degree of difference in performance of V2 for the less aggressive ("original") blacklist versus the more aggressive ("tier 3") blacklist in Spanish?

RQ5: What is the degree of difference in performance of V2 for the less aggressive ("original") blacklist versus the more aggressive ("tier 3") blacklist at MC for Portuguese and Spanish?

RQ6: What is the degree of difference in performance of V2 for the less aggressive ("original") blacklist versus the more aggressive ("tier 3") blacklist at BC?

The rationale behind RQ3 and RQ4 is that the approach is intended for use in a multilingual environment, and the process for adapting V2 to different languages involves modifying the blacklists via the pilo
