
Essay: Natural Language Processing (reflection on lecture)

Essay details:

  • Subject area(s): Computer science essays
  • Published: 15 October 2019*
  • Last Modified: 22 July 2024
  • Words: 754 (approx)


SECTION 1

Although many of the lectures in CS1 were eye-opening and engaging, the lectures given by Professor Kai-Wei Chang on Natural Language Processing and Professor Harry Xu on Big Data were the most interesting to me.

Professor Kai-Wei Chang’s lecture on natural language processing greatly influenced my understanding of computer science. Before coming to UCLA, my knowledge of computer science was limited to basic coding: I had written only rudimentary programs and knew little beyond the fundamental concepts. Professor Chang’s lecture expanded my understanding by demonstrating some of the myriad real-world applications of computer science. One application of natural language processing that stood out to me was information extraction, which, as Professor Chang put it, converts “unstructured text to database entries” [INSERT CITATION]. This application is especially useful today because of the massive amount of data available on the internet; the ability to distill that data into a simplified, readable format would be invaluable. Learning about the challenges of information extraction also deepened my understanding. I found the challenge of ambiguity in natural language particularly surprising, since computers are often assumed to have analytical capabilities beyond those of a human. After learning about word sense ambiguity, the problem of “picking the understanding which is more likely” [INSERT CITATION], I realized that even computers with massive processing power are useless if they are not given the proper rules to follow. The lecture taught me about both the capabilities of computer science and the difficulties that arise when putting those ideas into practice.
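The idea of turning unstructured text into database entries can be sketched with a toy example. This is purely illustrative (it is not the method from the lecture, and the sentences and field names are invented): a regular expression pulls structured records out of free-running prose.

```python
import re

# Toy illustration of information extraction: converting unstructured
# text into structured "database entries". The text, pattern, and field
# names here are invented for demonstration.
text = "Alice joined UCLA in 2017. Bob joined MIT in 2019."

# Capture three fields: a name, a school, and a four-digit year.
pattern = re.compile(r"(\w+) joined (\w+) in (\d{4})")

entries = [
    {"name": name, "school": school, "year": int(year)}
    for name, school, year in pattern.findall(text)
]

for entry in entries:
    print(entry)
```

Real information extraction systems must cope with far messier language than this fixed pattern allows, which is exactly where the ambiguity problems discussed in the lecture arise.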

Professor Harry Xu’s lecture on big data also shaped my understanding of computer science. While Professor Chang’s lecture focused on gathering data, Professor Xu’s focused on using gathered data to accomplish a task. His lecture centered on the field of data science, which entails “collecting large amounts of data and doing something with it” [INCLUDE CITATION]. One thing I realized during the lecture was that the applications of big data are virtually endless: from online advertising to sports to ocean health, big data is intertwined with nearly every aspect of our lives. What made the lecture even more impactful was the idea of “using data to build models and make predictions” [INSERT CITATION], the concept of machine learning. Although I had a vague notion of what machine learning was, learning where it is applied in daily life broadened the scope through which I had previously viewed computer science. The field has enormous untapped potential to benefit our lives. With so many benefits, however, the risks increase as well. Professor Xu noted that since the “ability to collect and analyze data will only improve, privacy and manual effort will go away” [INSERT CITATION].
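The phrase “using data to build models and make predictions” can be made concrete with a minimal sketch. Assuming only invented sample data, this fits a straight line y = a·x + b by ordinary least squares and then predicts an unseen value; real machine-learning systems use far richer models, but the collect-fit-predict loop is the same.

```python
# Minimal sketch of "using data to build models and make predictions":
# fit a line y = a*x + b to observed points via ordinary least squares,
# then predict y at an unseen x. The data points are invented.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance(x, y) / variance(x); intercept from the means.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

prediction = a * 5.0 + b  # predict y for x = 5
print(round(a, 2), round(b, 2), round(prediction, 2))
```

Even this tiny model illustrates the risk Professor Xu raised: the predictions are only as trustworthy as the data collected to build them.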

After learning about these applications, I realized how vast and diverse the field of computer science really is as well as the importance of balancing the risks and rewards of implementation.

SECTION 2

In the modern age, the growing presence of technology leaves people with less and less time to curl up with a good book. And when a person finally finds that time, they often discover the book is not well suited to their tastes, or that the story unfolds in an unsatisfying way. One solution would be to create a unique book for each person based on their preferences. Based on current techniques in the fields of natural language processing and big data, my forecast for a new CS application with high societal impact by 2027 is procedurally generated novels.

My forecast of procedurally generated novels builds on techniques from the field of natural language processing. To actually generate new books, a computer must have a general understanding of sentence construction and storytelling. One way to give a machine a sense of these concepts is a method called text classification. A paper from Shandong University in China describes this method as “one of the fundamental tasks in natural language processing, targeting at classifying a piece of text content into one or multiple categories” [INSERT CITATION]. Using this technique, we could break down the contents of books and organize them into their underlying structures. For instance, a typical book can be broken down into five stages: exposition, rising action, climax, falling action, and resolution.
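A toy version of this kind of text classification can be sketched as follows. This is not the method from the cited paper: the five stage labels come from the essay, but the keyword sets and sample passages are invented, and the classifier simply picks the stage whose keywords overlap the passage the most.

```python
# Toy text classifier: assign a passage to one of the five narrative
# stages by counting overlap with hand-picked keyword sets. The keyword
# lists and example passages are invented for illustration.
STAGE_KEYWORDS = {
    "exposition": {"once", "lived", "kingdom", "ordinary", "introduce"},
    "rising action": {"journey", "trouble", "began", "conflict"},
    "climax": {"battle", "confronted", "final", "showdown"},
    "falling action": {"aftermath", "returned", "wounds"},
    "resolution": {"peace", "ever", "after", "settled"},
}

def classify_stage(passage: str) -> str:
    """Return the stage whose keyword set best overlaps the passage."""
    words = set(passage.lower().split())
    return max(STAGE_KEYWORDS, key=lambda s: len(words & STAGE_KEYWORDS[s]))

print(classify_stage("Once upon a time a knight lived in a quiet kingdom"))
print(classify_stage("In the final battle the knight confronted the dragon"))
```

Modern text classifiers learn these associations from labeled examples instead of hand-written keyword lists, but the goal is the same: mapping a piece of text into one or more categories.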

About this essay:

If you use part of this page in your own work, you need to provide a citation, as follows:

Essay Sauce, Natural Language Processing (reflection on lecture). Available from:<https://www.essaysauce.com/computer-science-essays/2018-11-30-1543572419/> [Accessed 12-04-26].


* This essay may have been previously published on EssaySauce.com and/or Essay.uk.com at an earlier date than indicated.