Resources > language processing

    4 results

    PDF


    Published on 1.10.2018 by Equipo GNOSS

    Artificial Intelligence and Life in 2030. Stanford University

    "Artificial Intelligence and Life in 2030" One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA,  September 2016. Doc: http://ai100.stanford.edu/2016-report. Accessed:  September 6, 2016.

    Executive Summary. Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action. While the rate of progress in AI has been patchy and unpredictable, there have been significant advances since the field's inception sixty years ago. Once a mostly academic area of study, twenty-first century AI enables a constellation of mainstream technologies that are having a substantial impact on everyday lives. Computer vision and AI planning, for example, drive the video games that are now a bigger entertainment industry than Hollywood. Deep learning, a form of machine learning based on layered representations of variables referred to as neural networks, has made speech understanding practical on our phones and in our kitchens, and its algorithms can be applied widely to an array of applications that rely on pattern recognition. Natural Language Processing (NLP) and knowledge representation and reasoning have enabled a machine to beat the Jeopardy champion and are bringing new power to Web searches.
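
    To make the phrase "layered representations of variables" concrete, here is a minimal sketch (ours, not the report's; the shapes, weights, and function names are purely illustrative) of a two-layer forward pass in which each layer re-represents the output of the layer below:

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity: without it, stacked layers would collapse
    # into a single linear map and add no representational depth.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass an input through a stack of (weights, bias) layers; each layer
    # re-represents the previous layer's output as a new vector of variables.
    h = x
    for W, b in layers:
        h = relu(W @ h + b)
    return h

# Illustrative shapes only: a 4-dim input mapped through 8 hidden units
# down to a 3-dim learned representation, with random untrained weights.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((3, 8)), np.zeros(3))]
print(forward(rng.standard_normal(4), layers))
```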

    While impressive, these technologies are highly tailored to particular tasks. Each application typically requires years of specialized research and careful, unique construction. In similarly targeted applications, substantial increases in the future uses of AI technologies can be expected, including more self-driving cars, healthcare diagnostics and targeted treatments, and physical assistance for elder care. AI and robotics will also be applied across the globe in industries struggling to attract younger workers, such as agriculture, food processing, fulfillment centers, and factories. They will facilitate delivery of online purchases through flying drones, self-driving trucks, or robots that can get up the stairs to the front door.

    This report is the first in a series to be issued at regular intervals as a part of the One Hundred Year Study on Artificial Intelligence (AI100). Starting from a charge given by the AI100 Standing Committee to consider the likely influences of AI in a typical North American city by the year 2030, the 2015 Study Panel, comprising experts in AI and other relevant areas, focused their attention on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years. Though drawing from a common source of research, each domain reflects different AI influences and challenges, such as the difficulty of creating safe and reliable hardware (transportation and service robots), the difficulty of smoothly interacting with human experts (healthcare and education), the challenge of gaining public trust (low-resource communities and public safety and security), the challenge of overcoming fears of marginalizing humans (employment and workplace), and the social and societal risk of diminishing interpersonal interactions (entertainment). The report begins with a reflection on what constitutes Artificial Intelligence, and concludes with recommendations concerning AI-related policy. These recommendations include accruing technical expertise about AI in government and devoting more resources—and removing impediments—to research on the fairness, security, privacy, and societal impacts of AI systems.

    Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy, are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly. Application design and policy decisions made in the near term are likely to have long-lasting influences on the nature and directions of such developments, making it important for AI researchers, developers, social scientists, and policymakers to balance the imperative to innovate with mechanisms to ensure that AI's economic and social benefits are broadly shared across society. If society approaches these technologies primarily with fear and suspicion, missteps that slow AI's development or drive it underground will result, impeding important work on ensuring the safety and reliability of AI technologies. On the other hand, if society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades.

    Study Panel: 

    Peter Stone, Chair, University of Texas at Austin
    Rodney Brooks, Rethink Robotics
    Erik Brynjolfsson, Massachusetts Institute of Technology
    Ryan Calo, University of Washington
    Oren Etzioni, Allen Institute for AI
    Greg Hager, Johns Hopkins University
    Julia Hirschberg, Columbia University
    Shivaram Kalyanakrishnan, Indian Institute of Technology Bombay
    Ece Kamar, Microsoft Research
    Sarit Kraus, Bar Ilan University
    Kevin Leyton-Brown, University of British Columbia
    David Parkes, Harvard University
    William Press, University of Texas at Austin
    AnnaLee (Anno) Saxenian, University of California, Berkeley
    Julie Shah, Massachusetts Institute of Technology
    Milind Tambe, University of Southern California
    Astro Teller, X

     

     


    Web Page


    Published on 28.2.2012 by Equipo GNOSS

    Extended Semantic Web Conference 2012 (ESWC 2012)

    The Extended Semantic Web Conference 2012 (ESWC 2012) will take place in Heraklion, Crete (Greece) from May 27 to 31, 2012. It is a key conference for discussing the latest scientific results and technological innovations around semantic technologies.

    The conference's topics of interest include:

    • Artificial Intelligence
    • Natural Language Processing
    • Database and Information Systems
    • Information Retrieval
    • Machine Learning
    • Multimedia
    • Distributed Systems
    • Social Networks
    • Web Engineering
    • Web Science


    Web Page


    Shared on 7.5.2009 by Ricardo Alonso Maturana

    The Lexxe search engine is based on natural language processing (this type of search engine is also called a "third-generation" search engine) and provides answers to concrete questions posed as such, rather than to keywords. When you run an English-language search on Lexxe, it first gives you an answer to your question. For example, for the question "Who is obama?" it offers the following response: "Answer: [1] Barack Hussein Obama II (; born August 4, 1961) is the junior United States Senator from Illinois and presidential nominee of the Democratic Party ... [2] An African (Luo) surname".
    In addition to the concrete answer, it offers a set of links it considers significant results in which the answer can be found, and it creates groups or categories (clusters) to organize those links. Lexxe can be said to work reasonably well on simple questions with simple answers, but when things get more complicated it may not give you an adequate answer.
    In Spanish it does not seem able to compute concrete answers or run very precise searches, although it does propose links and clusters that group them.
    Some reviews of and information about this search engine can be found at the following links:
    - Lexxe: Search Engine that Answers Queries, by Arun Radhakrishnan in Search Engine Journal.
    - Lexxe natural language search reviewed, in Panda Search Engine News.
    - Retrospective: My Day Without Google Fails to Impress, in Read Write Web (which also discusses the Powerset search engine).
    - Lexxe Search Technology, where the company itself gives a brief account of the technology Lexxe uses to run searches and generate answers.
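
    As a rough illustration of the question-answering flow described above (detect a direct question, produce an answer, and group the supporting links into clusters), here is a hypothetical toy sketch; every table, URL, and function in it is invented for this note and has nothing to do with Lexxe's actual implementation:

```python
from collections import defaultdict

# Toy stand-ins for a real crawl and answer index; every entry is invented.
ANSWERS = {"who is obama": "Barack Hussein Obama II (born August 4, 1961) "
                           "is the junior United States Senator from Illinois ..."}
LINKS = [("en.wikipedia.org/wiki/Barack_Obama", "biography"),
         ("www.senate.gov", "politics"),
         ("luo-surnames.example", "etymology")]

def search(query):
    q = query.strip().lower().rstrip("?")
    # Question detection: a real third-generation engine parses the question
    # linguistically; a leading wh-word is the crudest possible proxy.
    is_question = q.split()[0] in {"who", "what", "when", "where", "why", "how"}
    answer = ANSWERS.get(q) if is_question else None
    # Cluster the result links by topic so they can be presented in groups.
    clusters = defaultdict(list)
    for url, topic in LINKS:
        clusters[topic].append(url)
    return answer, dict(clusters)

answer, clusters = search("Who is obama?")
print(answer)    # the direct answer, when one is found
print(clusters)  # links grouped into categories
```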

     


    Web Page


    Shared on 7.5.2009 by Ricardo Alonso Maturana

    Cognition's Semantic Natural Language Processing (NLP) adds word and phrase meaning and understanding to applications like Semantic Search.

    Cognition Technologies is a company based in Culver City, California, dedicated to natural language processing (NLP) technologies, a project that Dr. Kathleen Dahlgren, Cognition's co-founder and CTO, and a team of linguists and computational scientists have been developing for more than 23 years.
    In 2007 the company launched the semantic search engine CognitionSearch, which uses linguistic and computational algorithms to analyze relationships and associations between keywords in order to deliver meaningful, more precise results that take the context of the search into account. The technology is based on:
    - Ontology: to decipher the sense of a word
    - Morphology: to disambiguate the different forms of a word (e.g., possible from possibility)
    - Synonymy: to relate words to concepts.
    Cognition operates in English and has chosen three subject areas in which to apply its technology initially: health (MEDLINE, PubMed, etc.), legislation, and politics. The database behind the search engine contains 506,000 word roots, 536,000 concepts, 17,000 ambiguous words, 191,000 phrases, and more than 4,000,000 semantic phrases.
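
    The following toy sketch shows one way those three components could fit together; the tables and names are invented for illustration, are orders of magnitude smaller than the resources the entry describes, and do not reflect CognitionSearch's actual design:

```python
# Morphology: reduce surface forms to a shared stem.
STEMS = {"possibility": "possible", "possibilities": "possible",
         "treating": "treat", "treatment": "treat"}

# Synonymy: relate stems to concepts.
CONCEPTS = {"possible": "FEASIBILITY", "treat": "MEDICAL_THERAPY"}

# Ontology: the domains in which each concept is a valid reading,
# used to pick out the intended sense of an ambiguous word.
SENSES = {"MEDICAL_THERAPY": {"health"}, "FEASIBILITY": {"law", "politics"}}

def interpret(word, domain):
    stem = STEMS.get(word.lower(), word.lower())           # morphology
    concept = CONCEPTS.get(stem)                           # synonymy
    if concept and domain in SENSES.get(concept, set()):   # ontology check
        return concept
    return None

print(interpret("treating", "health"))     # MEDICAL_THERAPY
print(interpret("possibility", "health"))  # None: sense not valid in this domain
```
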
    Here are a couple of demo videos in which they explain how Cognition's technology and search work.
    You can also read about Cognition in this review by Barbara Quint (April 2007) and in Search Engine Journal: "Cognition Search: Doing Search the Not so Google Way", by Arnold Zafra (March 2007), and "Cognition Search: Formula for 'Meaning' in Search", by Arun Radhakrishnan (July 2007).
