Apply AI: Analyze Customer Reviews Course Assessment Final Exam Answers
The Apply AI: Analyze Customer Reviews Course Assessment Final Exam Answers are a practical resource for learners applying artificial intelligence to real-world business analysis. The assessment covers AI techniques such as sentiment analysis, natural language processing (NLP), and data interpretation for evaluating customer feedback effectively. With accurate, expert-verified answers, students can strengthen their understanding of AI-driven insights, sharpen their decision-making, and prepare confidently for the final exam while gaining hands-on experience analyzing customer reviews for business improvement.
1. When analyzing customer reviews, what is the primary purpose of thematic analysis?
- To manually read through every comment and cherry-pick the most interesting ones.
- To count the most frequent keywords and popular words in the text.
- To organize the data in terms of repeated ideas or patterns.
- To generate a simple summary that captures the main points of the text.
2. What is the main risk of simply “throwing it all into a chatbot” to analyze a large dataset of customer reviews?
- The chatbot will be unable to distinguish between positive and negative sentiment.
- The chatbot will struggle to handle text in different languages.
- The chatbot may generalize findings, miss important details, or hallucinate information that isn’t in the data.
- The chatbot’s response will only focus on the positive reviews and skip the negative reviews.
3. What are some common limitations or behaviors to be aware of when using chatbots for data processing, especially with large datasets or tabular data? Select all that apply.
- Chatbots may have daily usage limits or rate limits, particularly on free tiers.
- Some chatbots might default to processing only a sample of your data to manage costs, even if you don’t hit explicit usage limits.
- Some chatbots have a code interpreter feature, which allows the chatbot to write and run code to fulfill your request, then provide the output.
- Chatbots will always accurately guess what you are trying to do even if you don’t specify clearly.
4. When importing or exporting tabular text data between a spreadsheet application (like Microsoft Excel) and a large language model, which of the following could potentially lead to formatting or interpretation errors? Select all that apply.
- The file uses a comma “,” as a delimiter but the data itself contains commas in the text fields.
- The data contains more columns than rows.
- The CSV file is not saved with UTF-8 encoding.
- Special characters like & and < appear in text exported from the chatbot.
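The delimiter and encoding pitfalls above can be sketched with Python's standard `csv` module, which quotes any field that contains the delimiter so embedded commas survive a round trip. The review texts here are hypothetical examples, not data from the course.

```python
import csv
import io

# Hypothetical reviews whose text fields themselves contain commas.
rows = [
    {"id": "1", "review": "Great product, fast shipping, would buy again"},
    {"id": "2", "review": "Broke after a week, very disappointed"},
]

# csv.DictWriter quotes fields containing the delimiter, so the embedded
# commas do not split the text into extra columns.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "review"])
writer.writeheader()
writer.writerows(rows)

# Reading the CSV back recovers the original fields intact.
parsed = list(csv.DictReader(io.StringIO(buf.getvalue())))
print(parsed[0]["review"])  # the commas survive the round trip
```

When writing to an actual file rather than an in-memory buffer, pass `encoding="utf-8"` to `open()` so non-ASCII characters are not mangled on import.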
5. Which of the following is NOT an example of turning a harder task into an easier task for an LLM?
- Labeling topics: Instead of asking an LLM to generate a list of themes and provide the associated reviews within each theme, ask it to read one comment at a time and label it with topics.
- Grouping topics into themes: Instead of asking the LLM to find all topics that fit in each theme, ask it to step through the list of topics, assigning each topic to a theme.
- Labeling sentiment: Instead of asking the LLM to label the sentiment of one review at a time, ask it to list all positive reviews and then list all negative reviews.
- Creating theme summaries: Instead of asking the LLM to read all reviews and summarize the comments related to each theme, provide a table where all reviews for a given theme are in one row and ask it to analyze a single row of data at a time.
6. Which of the following can be helpful elements to include in a chatbot prompt for data processing tasks? Select all that apply.
- Describe the overall goal
- Describe the input
- Provide a screenshot of the spreadsheet
- Describe the output
- Provide examples of inputs and outputs (few-shot prompting)
- Specify how the data should be processed
7. You have a table with thousands of customer reviews, each with a unique ID. Most reviews are in English but spread throughout are a handful of reviews in other languages. Your goal is to create a table with all reviews translated to English. Which of the following methods minimizes LLM token usage and leverages the LLM’s strength in language processing alongside the spreadsheet’s strength in efficient data manipulation?
- Ask the LLM to output a table containing only the non-English reviews (with the unique ID) and then use a spreadsheet’s lookup function (like VLOOKUP or XLOOKUP) to integrate the translations into your main table.
- Manually find each non-English review and ask an LLM to translate each one at a time. Paste the response back into the table.
- Provide the entire table of reviews and ask an LLM to generate the full table with a new column for translations, copying over reviews that were already in English.
- Ask the LLM to translate any non-English reviews. Manually search for the corresponding rows in the spreadsheet and paste in the translations.
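The lookup-based merge described in the first option can be sketched in plain Python: the LLM returns only the translated non-English rows keyed by ID, and a dictionary lookup (the equivalent of VLOOKUP/XLOOKUP) folds them back into the main table. The IDs and texts below are hypothetical.

```python
# Hypothetical main table of reviews, each keyed by a unique ID.
reviews = [
    {"id": "r1", "text": "Great battery life"},
    {"id": "r2", "text": "La batería dura muy poco"},
    {"id": "r3", "text": "Arrived on time"},
]

# Hypothetical LLM output: only the non-English rows, translated,
# keyed by the same unique IDs.
translations = {"r2": "The battery lasts very little"}

# Merge step: look up each ID in the translation table and fall back
# to the original text for rows that were already in English.
merged = [
    {"id": r["id"], "text_en": translations.get(r["id"], r["text"])}
    for r in reviews
]
print(merged[1]["text_en"])  # the Spanish row now carries its translation
```

Because only the handful of non-English rows ever reach the LLM, token usage stays minimal while the spreadsheet (or here, the dictionary lookup) handles the bulk data manipulation.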
8. When faced with a data processing task and deciding whether to use a chatbot’s natural language capability, a chatbot’s code execution capability, or a traditional spreadsheet application, which of the following statements can help guide your choice to use a spreadsheet or chatbot code?
- Use a spreadsheet or chatbot code if the task involves highly subjective interpretations, nuanced understanding of human language, or creative content generation.
- Use a spreadsheet or chatbot code if the task requires a precise, rules-based approach based on clearly defined logical conditions or mathematical formulas.
- Use a spreadsheet or chatbot code if the data is stored across diverse, unstructured file formats.
- Use a spreadsheet or chatbot code if the task involves multiple languages and emojis.
9. You have labeled each customer review with one or more topic labels in a “topics” column, where multiple topics are separated by a unique delimiter (e.g., “Topic1//Topic2//Topic3”). Which of the following is generally the most efficient and robust method for “flattening” this data into separate rows (i.e., creating a new row for each topic while keeping the other column values)?
- Use formulas in a spreadsheet application like Excel
- Use code generated by a chatbot
- Use the language processing capability of an LLM directly
- Manually copy and paste the data into separate rows
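The flattening step itself is rules-based, which is why chatbot-generated code is a good fit: splitting on the `//` delimiter and emitting one row per topic is a few lines of Python. The rows below are hypothetical.

```python
# Hypothetical labeled table: multiple topics per review, "//"-delimited.
rows = [
    {"id": "r1", "topics": "Shipping//Price"},
    {"id": "r2", "topics": "Quality"},
]

# Split each "topics" cell on the delimiter and emit one row per topic,
# carrying the other column values along.
flattened = [
    {"id": row["id"], "topic": topic}
    for row in rows
    for topic in row["topics"].split("//")
]
print(len(flattened))  # r1 becomes two rows, r2 stays one
```

The same transformation is awkward with spreadsheet formulas and error-prone by hand at scale, but deterministic code handles thousands of rows identically every time.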
10. Once you have a flattened table with each unique review and theme pair in a separate row, which of the following methods would be best for counting the number of reviews that mention each theme?
- Chatbots always provide better counts because they use AI.
- Pivot tables use clear, rules-based logic, so the count is accurate and easy to verify.
- Chatbots work better with numbers, so they’re more reliable for counting.
- Manually tallying the numbers is the only way to get the correct answer.
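The pivot-table logic behind the correct option is a simple, verifiable group-and-count. A minimal Python sketch using `collections.Counter` on a hypothetical flattened table:

```python
from collections import Counter

# Hypothetical flattened table: one review/theme pair per row.
flat = [
    {"id": "r1", "theme": "Shipping"},
    {"id": "r1", "theme": "Price"},
    {"id": "r2", "theme": "Shipping"},
]

# Count how many rows mention each theme - the same rules-based
# aggregation a spreadsheet pivot table performs.
counts = Counter(row["theme"] for row in flat)
print(counts["Shipping"])  # 2
```

Because the logic is deterministic, anyone can re-run the count and get the same answer, unlike asking a chatbot to tally rows, which can silently miscount.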
11. When analyzing a large dataset of customer reviews, it can be helpful to ask an LLM to give a relevance score for each review, for example a rating on a scale of 0-5 of how important the review is to the business. Which of the following best explains why a relevance score is helpful? Select all that apply.
- A relevance score allows for filtering and nuanced ranking of reviews, which can be used to prioritize the most informative reviews first.
- A relevance score ensures that all reviews are counted equally so that no reviews are left out.
- A relevance score allows for sorting the reviews by length since longer reviews are more useful.
- A relevance score helps automatically filter out negative reviews since only positive reviews are useful.
- A relevance score can allow you to filter out less relevant reviews while still letting you check whether any reviews with a low relevance score are mislabeled and in fact relevant.
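The filter-and-rank workflow from the correct options can be sketched in a few lines: keep reviews at or above a threshold, sorted most-relevant-first, and set aside the low-scoring rows for a human spot check. The scores and texts are hypothetical.

```python
# Hypothetical reviews with LLM-assigned relevance scores (0-5 scale).
reviews = [
    {"id": "r1", "text": "App crashes on login", "relevance": 5},
    {"id": "r2", "text": "nice", "relevance": 1},
    {"id": "r3", "text": "Checkout charged me twice", "relevance": 4},
]

THRESHOLD = 3  # an assumed cutoff; tune it for your dataset

# Prioritize the most informative reviews first.
keep = sorted(
    (r for r in reviews if r["relevance"] >= THRESHOLD),
    key=lambda r: r["relevance"],
    reverse=True,
)

# Low-scoring rows are not discarded blindly - a human can
# spot-check them for mislabeled but genuinely relevant reviews.
spot_check = [r for r in reviews if r["relevance"] < THRESHOLD]
print(keep[0]["id"], len(spot_check))
```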
12. What are some reasons why using a simple keyword search when grouping and analyzing open-ended text might fail? Select all that apply.
- Keyword search can often miss meaning when people use different words to express similar ideas.
- Misspellings or slang in the review can cause keyword matching to fail.
- Keyword search is faster than having an LLM do the task, so the code has less time to think about the task.
- The process may not pick up when the person writing the review added a negation such as “I didn’t hate it – it was actually not bad at all”.
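The failure modes above are easy to reproduce with a naive substring match. In this hypothetical example, all three reviews concern the checkout experience, yet a keyword search for "checkout" finds only one of them, and the one it finds contains a negation a keyword-based sentiment check would likely misread.

```python
# Hypothetical reviews, all about the checkout experience.
reviews = [
    "Paying at the till was effortless",            # synonym: no keyword
    "The chekout page kept crashing",               # misspelled keyword
    "I didn't hate the checkout - not bad at all",  # negation flips meaning
]

# Naive keyword match for the "checkout" topic.
matches = [r for r in reviews if "checkout" in r.lower()]
print(len(matches))  # only 1 of 3 relevant reviews is found
```

An LLM reading each review for meaning would group all three under the same theme, which is why the course favors LLM labeling over keyword matching for this step.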
13. Why is it important to have a “human in the loop” to review intermediate steps when performing thematic analysis, rather than giving the entire task to a chatbot and trusting the final themes and counts? Select all that apply.
- Humans can manually retype the chatbot’s answers into a spreadsheet faster than the chatbot.
- Chatbots can sometimes make subtle errors in labeling or counting. Human review at this step helps catch mistakes before they affect the final output.
- Language models are not capable of identifying themes unless they are given predefined keywords. Humans need to provide keywords first.
- Chatbots are always biased toward negative feedback, so a human is needed to balance the results.
- A language model may not know exactly how you and your team prefer to interpret and label the data.
14. When using a chatbot to help improve your prompts, why is it important to review the revised prompt carefully, even if the chatbot’s suggestion seems helpful?
- Because chatbots suggest prompts that only work for numerical data.
- Because chatbots can only revise prompts if you provide example responses first.
- Because chatbots may remove or reword parts of your original prompt that were already working well.
- Because once you accept a chatbot’s prompt revision you can’t go back to your original version.
15. In the process of doing thematic analysis on open-ended text data, we use a combination of chatbots, spreadsheets, and human judgment. Why is it important to choose the appropriate method at each step instead of relying on just one tool?
- Because chatbots work best when given full control over the entire analysis pipeline.
- Because each method is limited: chatbots may mislabel or miscount, spreadsheets require clean inputs, and it can be time consuming for humans to manually label each review.
- Because spreadsheets are always better than chatbots for interpreting context and nuance.
- Because switching between tools causes more errors in the analysis.