Wednesday, December 16, 2009

WINTER STORM - intensive training session in language science

You are cordially invited to participate in Winter Storm!

Winter Storm is a FREE 2-week intensive training session in language science, covering the latest software and hardware techniques as well as language diversity research topics in the fields of cognitive neuroscience and computational/neural modeling.

Winter Storm takes place on campus Monday through Friday, 9-4, from January 11 to 22 (excluding MLK Day). Daily activities include breakfast, morning seminars on hardware and software techniques, guest speaker lunch presentations, and research focus groups. Winter Storm is brought to you by the University of Maryland's NSF-IGERT program in Biological and Computational Foundations of Language Diversity. This is an interdisciplinary program that brings together students and faculty from Computer Science, Electrical Engineering, Hearing & Speech, Human Development, Linguistics, NACS, Philosophy, Psychology, and Second Language Acquisition.

Winter Storm is open to all interested participants. A full schedule will be sent out at a later date to everyone who signs up. Participants need not attend every session to participate. For more information or to sign up, please contact Csilla Kajtar by January 1st. More information will also appear at languagescience.umd.edu.

Monday, December 7, 2009

Training course in the use of fMRI at the University of Michigan

The University of Michigan offers a two-week training course in the use of fMRI that includes instruction on the motivation for using fMRI, the physics that underlies the technique, the design of experiments, the acquisition of data, the analysis of those data, and the interpretation of brain activations that result. The course is open to faculty, postdocs, and graduate students, and funding is available for travel and living expenses. More information can be found at the website below, which contains application information along with lecture and lab notes, as well as podcasts of the lectures from the 2009 version of the course.

More Info: http://sitemaker.umich.edu/fmri.training.course/home

Wednesday, November 25, 2009

Pedro Alcocer - Language Research in Brazil

Last August, Pedro Alcocer, a third-year PhD student in the Linguistics Department at UMD and a second-year fellow in the IGERT program, traveled to Rio de Janeiro, Brazil, to run experiments that would shed light on how humans use memory in real time when comprehending language. This research can tell us more about how memory is structured and how search algorithms operate over that structure.

Pedro was hosted by the Universidade Federal do Rio de Janeiro where he worked in the labs of Profs. Aniela Improta França and Marcus Maia. During his five weeks in Rio - from August to September 2009 - he was accompanied by an undergraduate research assistant, Chris O'Brien from Michigan State University.

“Brazil was an excellent place to do this kind of research,” Pedro says, “because Brazilian Portuguese has a rather unique grammatical constraint on how it licenses null subjects that we can exploit to learn more about the structure of memory. Rio, in particular, was a good place to be because it is a center for psycholinguistic research in Brazil.”

Pedro's trip was funded through the NSF-IGERT grant based in the Linguistics Department at UMD.

Tuesday, November 17, 2009

Brian Dillon and Candise Chen awarded NSF-EAPSI grants

Candise Chen (Human Development) and Brian Dillon (Linguistics) were awarded the NSF's East Asia and Pacific Summer Institutes (EAPSI) award. EAPSI's goal is to introduce U.S. graduate students to East Asia and Pacific science and engineering and to foster future international collaborations. In addition to the $5,000 stipend, the award covers a round-trip ticket from the U.S. to the host country and housing in the host location. Students also benefit from a pre-departure orientation in the Washington, D.C. area.
Candise Chen's project was titled "Development of Prosodic Sensitivity in Young Chinese Children and Its Relation to Reading"; she spent 8 weeks in China from June to August 2009. Brian Dillon studied "Memory Dynamics in the Processing of Chinese Anaphors" and was hosted by the National Key Laboratory in Cognitive Neuroscience at Beijing Normal University in China.
The competition for the 2010 EAPSI award is now open; the deadline for submitting applications is December 8, 2009. More information is available at http://nsfsi.org/.

Akira Omaki (LING) Awarded NSF Grant

Akira Omaki, a fifth-year Linguistics PhD student, won an NSF grant in the amount of $11,966 for his proposal entitled "Commitment and flexibility in the developing parser". The grant covers his travel expenses to Japan to conduct sentence processing research with Japanese children and adults, allowing him to compare language processing profiles in speakers of Japanese and English, two languages that differ significantly in word order.

This is how Akira describes his research: "Everybody acknowledges the importance of input in language learning. Most existing studies on input and language development tacitly assume that children can parse the input in an adult-like fashion but have surprisingly overlooked findings from recent child parsing research that shows that children often misanalyze adults' utterances. This begs for investigations of what children actually understand with their immature parser and how it might skew the distributional properties in the input. To address this question, I am investigating 5-year-olds' wh-dependency processing, and more specifically, a) whether they process wh-dependencies 'actively' like adults and temporarily entertain incorrect analyses, b) whether they can recover from the misanalyses caused by active processing, and c) whether the 'effective' input distribution that is skewed by the immature parsers can predict the course of learning of wh-constructions more accurately than the 'true' input distribution from an adult's perspective. The research uses i) a visual-world eye-tracking study and two types of story-based comprehension paradigms (Question-after-Story, Truth Value Judgment) to examine the time course of wh-dependency processing and reanalysis in Japanese and English, as well as ii) a CHILDES corpus analysis to examine what proportion of wh-dependencies is likely to cause misanalyses."
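As a rough illustration of the corpus step Akira mentions, the sketch below estimates what proportion of adult utterances in a CHILDES CHAT transcript contain a wh-word. This is only a toy approximation: the file name and the wh-word regex are assumptions for illustration, and real CHILDES analyses would typically use the CLAN tools rather than this heuristic.

import re
from collections import Counter

# Crude heuristic: count adult utterances in a CHAT transcript that
# contain an English wh-word. File name and regex are assumptions.
WH = re.compile(r"\b(what|who|whose|where|when|why|which|how)\b", re.I)

counts = Counter()
with open("sample.cha", encoding="utf-8") as f:
    for line in f:
        # CHAT utterance tiers start with "*SPK:"; skip the child's own.
        if line.startswith("*") and not line.startswith("*CHI"):
            counts["adult"] += 1
            if WH.search(line):
                counts["wh"] += 1

if counts["adult"]:
    print("proportion of adult utterances with a wh-word:",
          counts["wh"] / counts["adult"])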

Monday, November 16, 2009

"How we do what we want: An ideomotor approach to voluntary action." Bernhard Hommel

Date: November 19, 2009
Time: 3:30-5:30
Location: BRB 1103
Bernhard Hommel (Psychology, Leiden)

Title: How we do what we want: An ideomotor approach to voluntary action.

Abstract: Voluntary action is anticipatory and, hence, must depend on associations between actions and their perceivable effects. This talk provides an overview of recent behavioral, electrophysiological, and imaging work from our lab on the acquisition and functional role of action-effect associations in infants, children, and adults. It shows that action effects are acquired from very early on and are still integrated spontaneously in adults. Once acquired, action effects serve to select actions by means of a network including the (developing) frontal cortex/SMA, connecting via hippocampus to the perceptual areas that code for sensory action effects. However, the impact and role of action-effect codes are regulated by the agent's processing mode and intentions.

Friday, November 13, 2009

"Teaching the Web to Speak and Be Understood," Dr. Jeffrey P. Bigham, University of Rochester

Tuesday, November 17, 2009, 4-5 PM
Room 2116 Hornbake Building (HBK 2116), South Wing

Abstract:

In this talk I'll describe my efforts to teach the web to speak and be understood in order to improve web access for blind people.

The web is an unparalleled information resource, but remains difficult and frustrating to use for millions of blind and low vision people. My work attempts to achieve effective personalized access for blind web users with applications that benefit all users, even sighted ones. I'll discuss the following projects to demonstrate how: (i) Usable CAPTCHAs dramatically improve the success rate of blind users on CAPTCHA problems and illustrate the potential of improving an individual interaction, (ii) TrailBlazer helps users efficiently connect interactions together by predicting what users might want to do next, and (iii) WebAnywhere adds speech output to any web page without installing new software, even on locked-down public terminals. These projects have made significant advances in web accessibility and usability for blind web users, and yielded general lessons applicable for adapting, personalizing, and delivering better content to all users.

Moving forward, I'm exploring projects that take crowdsourcing accessibility beyond the web and into the real world. Mobile phones with cameras, GPS, microphones, and other sensors are ubiquitous. How can we provide tools that let blind people use their phones to make better sense of their visual environments in the real world? I'll describe early successes in this space achieved by using these sensors to connect people with remote workers and outline a number of usability challenges that need to be addressed to fully realize this potential.

About the speaker:

Jeffrey P. Bigham is an Assistant Professor in the Department of Computer Science at the University of Rochester and currently a Visiting Scientist at MIT CSAIL. Jeffrey received his B.S.E. degree in Computer Science in 2003 from Princeton University, and his M.Sc. and Ph.D. degrees, both in Computer Science and Engineering, from the University of Washington in 2005 and 2009, respectively. His work centers on web adaptation and automation, with a specific focus on how to enable blind people and others to collaboratively improve their own web experiences. For his work, he has won numerous awards, including two ASSETS Best Student Paper Awards, the Microsoft Imagine Cup Accessible Technology Award, the Andrew W. Mellon Foundation Award for Technology Collaboration, and Technology Review's Top 35 Innovators Under 35 Award.

Foreign Subtitles Improve Speech Perception

ScienceDaily (Nov. 11, 2009) — Do you speak English as a second language well, but still have trouble understanding movies with unfamiliar accents, such as Brad Pitt's southern accent in Quentin Tarantino's Inglourious Basterds? In a new study, published in the open-access journal PLoS One, Holger Mitterer (Max Planck Institute for Psycholinguistics) and James McQueen (MPI and Radboud University Nijmegen) show how you can improve your second-language listening ability by watching the movie with subtitles -- as long as these subtitles are in the same language as the film. Subtitles in one's native language, the default in some European countries, may actually be counter-productive to learning to understand foreign speech.

Mitterer and McQueen show that listeners can tune in to an unfamiliar regional accent in a foreign language. Dutch students showed improvements in their ability to recognise Scottish or Australian English after only 25 minutes of exposure to video material. English subtitling during exposure enhanced this learning effect; Dutch subtitling reduced it.

In the study, Dutch students who were unfamiliar with Scottish and Australian English watched either an episode of the Australian sitcom Kath & Kim or a shortened version of Trainspotting, which depicts a Scottish drug addict, Renton, and his friends -- with English subtitles, Dutch subtitles, or no subtitles. After this exposure, participants were asked to repeat back as many words as they could from 80 audio excerpts spoken by the main characters (Kath from Kath & Kim; Renton from Trainspotting). Half of the excerpts had already been heard by the participants in the exposure material, and half were new to them (taken from a different Kath & Kim episode or from a part of Trainspotting that had been edited out).

The researchers found that English subtitles were associated with the best performance on both previously heard and new material. Dutch subtitles also enhanced performance on the old items, but they led to worse performance on the new materials. The participants seemed to be using the semantic (meaning-based) information in the Dutch subtitles when listening to the English speech: the Dutch subtitles appear to have helped the participants decipher which English words had been uttered, as seen in the improved recognition of previously heard materials. They did not, however, allow participants to retune their phonetic categories so as to improve their understanding of new utterances from the same speaker.

Listeners can use their knowledge about how words normally sound to adjust the way they perceive speech that is spoken in an unfamiliar way. This seems to happen with subtitles too. If an English word was spoken with a Scottish accent, English subtitles usually told the perceiver what that word was, and hence what its sounds were. This made it easier for the students to tune in to the accent. In contrast, the Dutch subtitles did not provide this teaching function, and, because they told the viewer what the characters in the film meant to say, the Dutch subtitles may have drawn the students' attention away from the unfamiliar speech.

These findings also have educational implications. Since foreign subtitles seem to help with adaptation to foreign speech in adults, they should perhaps be used whenever available (e.g. on a DVD) to boost listening skills during second-language learning. Moreover, since native-language subtitles interfere with this kind of learning, such subtitles in television programmes should be made optional for the viewer.

This work was funded by the Max-Planck-Gesellschaft zur Förderung der Wissenschaften.



Journal reference:

  1. Mitterer et al. Foreign Subtitles Help but Native-Language Subtitles Harm Foreign Speech Perception. PLoS ONE, 2009; 4(11): e7785. DOI: 10.1371/journal.pone.0007785
Adapted from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.
Source: http://www.sciencedaily.com/releases/2009/11/091110202847.htm

New Brain Findings On Dyslexic Children: Good Readers Learn From Repeating Auditory Signals, Poor Readers Do Not

ScienceDaily (Nov. 12, 2009) — The vast majority of school-aged children can focus on the voice of a teacher amid the cacophony of the typical classroom thanks to a brain that automatically focuses on relevant, predictable and repeating auditory information, according to new research from Northwestern University. But for children with developmental dyslexia, the teacher's voice may get lost in the background noise of banging lockers, whispering children, playground screams and scraping chairs, the researchers say. Their study appears in the Nov. 12 issue of Neuron.

Recent scientific studies suggest that children with developmental dyslexia -- a neurological disorder affecting reading and spelling skills in 5 to 10 percent of school-aged children -- have difficulties separating relevant auditory information from competing noise.

The research from Northwestern University's Auditory Neuroscience Laboratory not only confirms those findings but presents biological evidence that children who report problems hearing speech in noise also suffer from a measurable neural impairment that adversely affects their ability to make use of regularities in the sound environment.

"The ability to sharpen or fine-tune repeating elements is crucial to hearing speech in noise because it allows for superior 'tagging' of voice pitch, an important cue in picking out a particular voice within background noise," said Nina Kraus, Hugh Knowles Professor of Communication Sciences and Neurobiology and director of the Auditory Neuroscience Laboratory.

In the article "Context-dependent encoding in the human auditory brainstem relates to hearing speech-in-noise: Implications for developmental dyslexia," Kraus and co-investigators Bharath Chandrasekaran, Jane Hornickel, Erika Skoe and Trent Nicol demonstrate that the remarkable ability of the brain to tune into relevant aspects in the soundscape is carried out by an adaptive auditory system that continuously changes its activity based on the demands of context.

Good and poor readers were asked to watch a video while the speech sound "da" was presented to them through an earphone in two different sessions during which the brain's response to these sounds was continuously measured.

In the first session, "da" was repeated over and over and over again (in what the researchers call a repetitive context). In the second, "da" was presented randomly amid other speech sounds (in what the researchers call a variable context). In an additional session, the researchers performed behavioral tests in which the children were asked to repeat sentences that were presented to them amid increasing degrees of noise.

"Even though the children's attention was focused on a movie, the auditory system of the good readers 'tuned in' to the repeatedly presented speech sound context and sharpened the sound's encoding. In contrast, poor readers did not show an improvement in encoding with repetition," said Chandrasekaran, lead author of the study. "We also found that children who had an adaptive auditory system performed better on the behavioral tests that required them to perceive speech in noisy backgrounds."

The study suggests that, in addition to conventional reading- and spelling-based interventions, poor readers who have difficulty processing information in noisy backgrounds could benefit from relatively simple strategies, such as placing the child in front of the teacher or using wireless technologies to enhance the sound of a teacher's voice for an individual student.

Interestingly, the researchers found that dyslexic children showed enhanced brain activity in the variable condition. This may enable dyslexic children to represent their sensory environment in a broader and arguably more creative manner, although at the cost of the ability to exclude irrelevant signals (e.g. noise).

"The study brings us closer to understanding sensory processing in children who experience difficulty excluding irrelevant noise. It provides an objective index that can help in the assessment of children with reading problems," Kraus says.

For nearly two decades, Kraus has been trying to determine why some children with good hearing have difficulties learning to read and spell while others do not. Early in her work, because the deficits she was exploring related to the complex processes of reading and writing, Kraus studied how the cortex -- the part of the brain responsible for thinking -- encoded sounds. She and her colleagues now understand that problems associated with the encoding of sound can also occur in lower perceptual structures.


Adapted from materials provided by Northwestern University, via EurekAlert!, a service of AAAS.
Source: http://www.sciencedaily.com/releases/2009/11/091111123600.htm

Thursday, November 5, 2009

Learning Correspondence Representations for Natural Language Processing, John Blitzer

John Blitzer, a postdoc at the University of California, Berkeley, will be giving a talk on Friday, November 6, 2009 at 10:00 a.m. in room 2120 AVW.

TITLE:

Learning Correspondence Representations for Natural Language Processing

ABSTRACT: The key to creating scalable, robust natural language processing (NLP) systems is to exploit correspondences between known and unknown linguistic structure. Natural language processing has experienced tremendous success over the past two decades, but our most successful systems are still limited to the domains and languages where we have large amounts of hand-annotated data. Unfortunately, these domains and languages represent a tiny portion of the total linguistic data in the world. No matter the task, we always encounter unknown linguistic features like words and syntactic constituents that we have never observed before when estimating our models. This talk is about linking these linguistic features to one another through correspondence representations.

The first part describes a technique to learn lexical correspondences for domain adaptation of sentiment analysis systems. These systems predict the general attitude of an essay toward a particular topic. In this case, words which are highly predictive in one domain may not be present in another. We show how to build a correspondence representation between words in different domains using projections to low-dimensional, real-valued spaces. Unknown words are projected onto this representation and related directly to known features via Euclidean distance. The correspondence representation allows us to train significantly more robust models in new domains, and we achieve a 40% relative reduction in error due to adaptation over a state-of-the-art system.
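To make the projection-and-distance idea concrete, here is a minimal sketch of that step. It is not Blitzer's actual method (which learns the projection from pivot-feature predictors); it simply derives a low-dimensional space from toy co-occurrence vectors via SVD and links an unknown word to its closest known word by Euclidean distance. All names and data below are illustrative.

import numpy as np

# Toy co-occurrence vectors over shared context features (random here,
# purely for illustration).
rng = np.random.default_rng(0)
features = 50
known = {w: rng.random(features)
         for w in ("excellent", "boring", "great", "dull")}
unknown = {"reliable": rng.random(features)}

# Derive a shared low-dimensional projection from the known vectors.
X = np.vstack(list(known.values()))
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = min(10, Vt.shape[0])  # target dimensionality of the shared space

def project(v):
    return (v - mean) @ Vt[:k].T

def nearest_known(vec):
    # Link an unknown word to the closest known word in the shared space.
    z = project(vec)
    return min(known, key=lambda w: np.linalg.norm(project(known[w]) - z))

for word, vec in unknown.items():
    print(word, "->", nearest_known(vec))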

The second part describes a technique to learn syntactic correspondences between languages for machine translation. Syntactic machine translation models exploit syntactic correspondences to translate grammatical structures (e.g. subjects, verbs, and objects) from one language to another. Given pairs of sentences which are translations of one another, we build a latent correspondence grammar which links grammatical structures in one language to grammatical structures in another. The syntactic correspondences induced by our grammar significantly improve a state-of-the-art Chinese-English machine translation system.

BIO: John Blitzer is a postdoctoral fellow in the computer science department at the University of California, Berkeley, working with Dan Klein. He completed his PhD in computer science at the University of Pennsylvania under Fernando Pereira, and in 2008 spent 6 months as a visiting researcher in the natural language computing group at Microsoft Research Asia. John's research focuses on applications of machine learning to natural language. In particular, he is interested in exploiting unlabeled data and other sources of side information to improve supervised models. He has applied these techniques to tagging, parsing, entity recognition, web search, and machine translation. More info on John's research interests is available at http://john.blitzer.com.


NACS Colloquium Series: Dr. Sarah Bottjer

Subject : NACS Colloquium Series: Dr. Sarah Bottjer

When : Friday, November 06, 2009 10:15 AM - 11:15 AM

Where : 1103 Bioscience Research Building

Event Type(s) : Colloquium


"Neural Substrates for Vocal Learning during the Sensitive Period in Songbirds"


Sarah Bottjer, Ph.D.

Professor of Neurobiology

University of Southern California


Website: www.nacs.umd.edu/calendar/index.cfm


For more information, contact:

Pam Komarek

Neuroscience and Cognitive Science Program

+1 301 405 8910

pkomarek@umd.edu

www.nacs.umd.edu

Thursday, Nov 5: Cognitive Science Colloquium

Subject : Cognitive Science Colloquium

When : Thursday, November 05, 2009 3:30 PM - 5:30 PM

Where : Bioscience Research Building : 1103

Event Type(s) : Colloquium


Judy DeLoache (Psychology, Virginia)


Title: Becoming Symbol-Minded


Abstract: Every society has a wealth of symbols and symbol systems that support cognition and communication, and all children must master a variety of symbolic artifacts to participate fully in their society. My research shows that in the course of learning to use various symbolic representations—including pictures, models, and replica objects—infants and young children experience a surprising amount of difficulty. They often fail to note the distinction between symbols and their referents, behaving toward symbolic artifacts as if they were what they stand for. The extended process of becoming symbol-minded begins in the first year of life, as infants start to learn about the nature of pictures: Through experience, they discover both what pictures are and what they are not. Slightly older children have substantial difficulty understanding and using scale models, but rapidly come to appreciate the nature and use of this type of symbol. At the same time, very young children make dramatic errors in which they try to interact with a miniature representational artifact as if it were its larger counterpart. Mastery of these different types of symbolic objects involves developmental progress in multiple domains.

Thursday of next week: discussion of two papers by Bernhard Hommel (Psychology, Leiden), which are accessible here:
http://www.philosophy.umd.edu/Faculty/pcarruthers/cog-sci.htm


For more information, contact:

Peter M. Carruthers

+1 301 405 5705

pcarruth@umd.edu


Tuesday, November 3, 2009

2009 Language Fair - PG Room, Stamp Student Union

Subject : 2009 Language Fair--PG Room, Stamp Student Union

When : Monday, November 02, 2009 10:00 AM - 2:30 PM

Where : Stamp Student Union : Prince George's Room

Event Type(s) : Special Event


2009 Language Career Fair

Tuesday, November 03, 2009 * 10:00AM - 02:00PM

Location: Prince George's Room, Stamp Student Union (Location Change)


The University of Maryland's 2009 Language Career Fair is co-sponsored by the University of Maryland's University Career Center and The President's Promise and The School of Languages, Literatures, and Cultures. This career fair is an excellent opportunity for University of Maryland undergraduate and graduate students seeking full-time, part-time, and internship positions in a variety of fields that value language skills to network with a diverse range of organizations. Participating organizations will be able to share information with students about their programs, organizations, and potential careers. Student participants will come from all majors and/or minors and are eager to use their language skills in a variety of industries. This event is open to all UMD students.


Website: www.careercenter.umd.edu


For more information, contact:

Stacey Hazel Brown

University Career Center and The President's Promise

+1 301 314 7241

sbrown12@umd.edu

www.careercenter.umd.edu

Monday, November 2, 2009

Lecture: "Principle C in Adult and Child Thai," Kamil Ud Deen of the University of Hawaii at Manoa

We're pleased to announce that this week's speaker in our 2009-2010 colloquium series will be Kamil Ud Deen of the University of Hawaii at Manoa, with a talk entitled "Principle C in Adult and Child Thai" (abstract follows).


Due to the large number of people leaving to attend BUCLD this coming weekend (including the speaker), the talk will be at a special time:


Wednesday, Nov 4, 12:00 PM

1108 Marie Mount Hall


We hope to see you there! --Colloquium Committee


Principle C in Adult and Child Thai


Children in a wide range of languages show evidence of Principle C at the earliest testable ages (Crain & McKee, 1985; Kazanina & Phillips, 2001; Lukyanenko et al., 2008; amongst many others). This has been taken as evidence that Principle C is a universal, innate principle of grammar (Crain, 1991). However, Thai is a language in which Principle C appears to be violable, at least in certain contexts. This could be taken as initial evidence that Principle C is in fact not part of Universal Grammar, but is learned from experience (or in the case of Thai, not learned because the experience does not contain Principle C). Moreover, if Principle C is learned from the input, then we predict that Thai children will never show evidence of Principle C, and will thus pattern like Thai adults from the earliest testable ages. But if Principle C is part of UG, Thai children should initially show evidence of Principle C in contexts that adults do not, only to 'unlearn' this over time. In this talk, I first describe the contexts in which Principle C applies in adult Thai. I then show that Thai children overgeneralize Principle C: they disallow a coindexed reading in contexts where adults allow it. This shows that Principle C is innately specified, and fully available to Thai children at early ages.

Thai adults apply Principle C in contexts where the relevant R-expressions are full DPs (modified by a classifier or a demonstrative), but not when the relevant R-expressions are unmodified by classifiers/demonstratives (Larson, 2005), as in (1).

(1)

DP: [Elephant CL big]_i said that [elephant CL big]_*i/j won the competition

Non-DP: Elephant_i said that elephant_i won the competition

We tested 66 Thai children (aged 4;5-6;2) on two Truth Value Judgment Tasks (Crain & McKee, 1985), and the results reveal that (i) Thai children, like Thai adults, adhere to Principle C in the DP condition, rejecting a co-indexed reading when the two R-expressions are full DPs; (ii) Thai children, *unlike* Thai adults, also adhere to Principle C in the non-DP context, rejecting the coindexed reading in contexts where adults accept it. Thus Thai children show evidence of Principle C not only at the earliest testable ages, but also in contexts in which Principle C does not apply in the adult language. Taken together, these facts provide strong and novel evidence that Principle C is specified by Universal Grammar and is available to children even in conditions in which the input is variable and potentially inconsistent with Principle C.

Phonology & General Linguistics: Assistant Professor, University of Missouri - Columbia


Rank: Assistant Professor
Deadline: Until filled
Ling Fields: General Linguistics - Phonology
Department: English Department

Job Description: The English Department at MU seeks an assistant professor (tenure track) in linguistics with a specialization in phonology. PhD in linguistics or related field preferred.
Appointment begins 15 August 2010.

The successful candidate will complete a three-member faculty team in the English department, teaching courses that contribute to an interdisciplinary major in linguistics and to a linguistics component of our English degrees.

MU is the state's flagship university. We offer generous research leaves for faculty, and the standard teaching schedule is two courses per semester. Send a letter of application and CV to Patricia Okker at the application address listed below. No electronic applications will be accepted.
Preference given to applications received by 2 November. Applications will be acknowledged by department letter. The University of Missouri is an EOE/AA/ADA employer.

Application Address:
Patricia Okker, Professor and Chair
University of Missouri - Columbia
Department of English
Columbia, MO 65211
USA

Contact information:
Sharon Black
blacksa@missouri.edu
Phone: 573-882-6066

Thursday, October 29, 2009

Structure and Knowledge in Natural Language Processing

Harold Daume III, Assistant Professor at the University of Utah, will be giving a talk on Tuesday, November 3, 2009 at 10:00 a.m. in room 2460.

Title: Structure and Knowledge in Natural Language Processing

Abstract:
Human language exhibits complex structure. To be successful, machine learning approaches to language-related problems must be able to take advantage of this structure. I will discuss several investigations into the relationship between structure and learning, which have led to some surprising conclusions about the role that structure plays in language processing. From there, I will consider the question of where this structure comes from. By taking insights from linguistic typology, I will show that very simple typological information can lead to significant increases in system performance for some simple syntactic problems. Moreover, I will show how this typological information can be mined from raw data.

(This talk includes joint work with Dan Klein, John Langford, Percy Liang, Daniel Marcu, and some of my students: Arvind Agarwal, Adam Teichert and Piyush Rai.)

Cognitive Science Colloquium

There will be a discussion of two recent papers by Judy DeLoache at 3:30 pm tomorrow, Thursday, October 29, in Bioscience Research Building 1103.


The papers can be accessed from the discussion link on the Colloquium website at:

http://www.philosophy.umd.edu/Faculty/pcarruthers/cog-sci.htm

Judy DeLoache will then visit the colloquium next week. Those wishing to meet with her should email <pcarruth@umd.edu>.


Peter Carruthers

www.philosophy.umd.edu/Faculty/pcarruthers/

Professor, Department of Philosophy, University of Maryland

1122B Skinner Building, College Park, MD 20742, USA

Tel. (office): 301 405 5705

Tel. (home): 301 270 5107

Tuesday, October 27, 2009

The biological foundations of language: Insights from sign language

Subject: The Annual Blackwell/Maryland Lectures Series
When: Wednesday, November 11, 2009 3:00 PM - Friday, November 13, 2009 12:00 PM
Where: Marie Mount Hall, Maryland Room
Event Type(s) : Lecture

Series title: "The biological foundations of language: Insights from sign language"

Signed languages provide a powerful tool for investigating the nature of human language and language processing, the relation between cognition and language, and the neural organization for language.

Lecture 1: Nov. 11, 3-6 pm, Maryland Room
Sign language and the brain

Lecture 2: Nov. 12, 3-6 pm, Maryland Room
Speaking vs. signing: How the biology of linguistic expression affects language processing

Lecture 3: Nov. 13, 10 am-12 pm, Maryland Room
Bimodal bilingualism

Website: www.ling.umd.edu

For more information, contact:
Kathleen M. Faulkingham
Linguistics Dept.
+1 301 405 7002
kathif@umd.edu
www.ling.umd.edu

Monday, October 26, 2009

Ivano Caponigro colloquium

I'm happy to announce that Ivano Caponigro from UCSD is giving a colloquium talk this Friday, 10/30, at 2 PM in MMH 1304. The title of the talk is 'Ask, and Tell as Well: Question-Answer Clauses in American Sign Language'.


Abstract:


A construction is found in American Sign Language that we call a Question-Answer Clause. It is made of two parts: the first part looks like an interrogative clause conveying a question, while the second part resembles a declarative clause that can be used to answer that question. The very same signer has to sign both, and the entire construction is interpreted as truth-conditionally equivalent to a declarative sentence. In this talk, we discuss these and other properties of Question-Answer Clauses and provide a syntactic, semantic and pragmatic account. In particular, we argue that Question-Answer Clauses are copular clauses consisting of a silent copula of identity connecting an interrogative clause in the precopular position with a declarative clause in the postcopular position. Pragmatically, they instantiate a topic/comment structure, with the first part expressing a sub-question under discussion and the second part expressing the answer to that sub-question. We discuss broader implications of our analysis for the Question Under Discussion Theory of discourse-structuring, for a popular analysis of pseudoclefts in spoken languages, and for recent proposals about the existence of exhaustivity operators in the grammar and the consequences for the syntax/semantics/pragmatics interface.

Friday, October 23, 2009

IGERT Lunch Talk: Emotionally arousing language: The effects of emotional interference in L1 and L2

When: October 29, 12pm - 1pm

Where: 1108B Marie Mount Hall, Linguistics Department


Susan Teubner-Rhodes will present: "Emotionally arousing language: The effects of emotional interference in L1 and L2."


She will be discussing the time course of the Emotional Stroop task and the disparate effects of top-down control on the processing of emotional language in a native versus a second language.


Susan is a second-year PhD student in the Psychology Department and the Program in Neuroscience and Cognitive Science at UMD.

Mark Liberman: "A New Golden Age of Phonetics"

CENTER FOR LANGUAGE AND SPEECH PROCESSING
Fall 2009 Seminar Series

Mark Liberman, "A New Golden Age of Phonetics"
University of Pennsylvania

Tuesday, October 27, 2009, 4:30 p.m.
Computational Science and Engineering Building, room B17

From the perspective of a linguist, today's vast archives of digital text and speech, along with new analysis techniques from language engineering, look like a wonderful new scientific instrument, a modern equivalent of the 17th-century invention of the telescope and microscope. We can now observe linguistic patterns in space, time, and cultural context, on a scale three to five orders of magnitude greater than in the past, and simultaneously in much greater detail than before. Scientific use of these new instruments remains mainly potential, especially in phonetics and related disciplines, but the next decade is likely to be a new "golden age" of research. This talk will discuss some of the barriers to be overcome, present some successful examples, and speculate about future directions.

BIOGRAPHY

Biographical information for Mark Liberman is available from
http://ling.upenn.edu/~myl.

DIRECTIONS:
http://www.clsp.jhu.edu/about/directions
UPCOMING TALKS: http://www.clsp.jhu.edu/seminars


Nov 3 Mirella Lapata (U of Edinburgh): Vector-based Models of Semantic Composition
Nov 10 Oren Etzioni (U of Washington): We KnowItAll: Lessons from a Quarter Century of Web Extraction Research

Thursday, October 22, 2009

Cognitive Science Colloquium: Evaluating Faces on Social Dimensions

Subject : Cognitive Science Colloquium

When : Thursday, October 22, 2009 3:30 PM - 5:30 PM

Where : Bioscience Research Building : 1103

Event Type(s) : Colloquium


Today: Alexander Todorov (Psychology, Princeton), "Evaluating Faces on Social Dimensions".

For an abstract of the talk, see the Colloquium website.

Next Thursday (same time and place): a discussion of two articles by Judy DeLoache (Psychology, Virginia), who visits the colloquium the week after.


Website: www.philosophy.umd.edu/Faculty/pcarruthers/cog-sci.htm


For more information, contact:

Peter M. Carruthers

+1 301 405 5705

pcarruth@umd.edu

Tuesday, October 20, 2009

Lunch talk: "Machine learning of phonological categories"

When: Thursday, October 22nd
Where: Marie Mount Hall, room 1108B
Lunch will be provided.

Brian Dillon, Ewan Dunbar, and Bill Idsardi will discuss machine learning of phonological categories.

PhD Completion Project: Teaching Portfolios

The Graduate School at University of Maryland presents:
PhD Completion Project Workshops: Teaching Portfolios

Spencer Benson,
Director
Center for Teaching Excellence

David Eubanks
Assistant Director
Center for Teaching Excellence


Friday, October 16, 2009

Lecture Hall 0200, Skinner Building
3:00 to 5:00 p.m.

This workshop will provide information about developing a professional teaching portfolio. Topics will include creating your statements of teaching philosophy and teaching experience and items to incorporate into the portfolio.

For registration and additional information, please visit: http://www.gradschool.umd.edu/grrd/workshops.


Questions: 301.405.4180 or retention@gradschool.umd.edu

Talk: Word order development in English and Norwegian: Micro-cues, information structure and economy

On November 9 at 11:30, Marit Westergaard, professor of linguistics and director of CASTL at the University of Tromsø, will give a talk in the upstairs conference room of the Linguistics Department (Marie Mount Hall). Lunch will be provided. The abstract is included below.
Word order development in English and Norwegian: Micro-cues, information structure and economy

Marit Westergaard
University of Tromsø – CASTL


Abstract
This paper considers the loss of verb-second (V2) word order in the history of English and in present-day Norwegian dialects, with a particular focus on the question of why it survives in certain contexts. I argue against a parametric approach to V2 word order and classify both English and Norwegian as mixed V2 grammars, i.e. grammars which require V2 in some contexts and non-V2 in others. Within an approach to language acquisition and change that is based on the existence of micro-cues in children's I-language grammars, some acquisition data are considered, showing that mixed V2 systems are easily learnable. Discussing some historical data from Old and Middle English (OE/ME) as well as synchronic variation in Norwegian, I argue that the choice between the two word orders is due to a productive syntactic rule which is sensitive to information structure. The loss of this rule, as well as the survival of certain remnant cases, is discussed in relation to processes in first language acquisition.

Monday, October 19, 2009

First post

This is the first and test post to the Languagescience News blog.