Mark As Completed Discussion

Good afternoon! Here's our prompt for today.

A very common interview challenge is to determine how often words appear in a given string or set of strings. Here's a version of this: return a list of duplicate words in a sentence.

Description

For example, given 'The dog is the best', return ["the"] (note that the match is case-insensitive: "The" and "the" count as the same word).

Likewise, given 'Happy thanksgiving, I am so full', you would return an empty array, since there are no duplicates in the string.

Constraints

  • The total length of the string is at most 100,000 characters
  • The string will consist of ASCII characters (any subset of the ASCII range may appear)
  • Expected time complexity: O(n), where n is the number of words in the string
  • Expected space complexity: O(n)

Try to solve this here or in Interactive Mode.

How do I practice this challenge?


Here's our guided, illustrated walk-through.

How do I use this guide?

The Foundation: Understanding the Problem

To find duplicate words in a sentence, we need a systematic approach for analyzing the occurrences of each word. Essentially, our task is to examine each word in the given string and keep a tally of how often it shows up.

Step 1: Tokenizing the Sentence into Words

What Does Tokenizing Mean?

Tokenization is the process of converting a sequence of text into individual "tokens" or units. In our context, this means splitting the given sentence into individual words.

How to Tokenize?

We can start by converting the entire string to lowercase to make our function case-insensitive. Then, we'll use the language's split function (strings.Split in Go) to break it into an array of words.

sLower := strings.ToLower(s)
splitS := strings.Split(sLower, " ")
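As a runnable sketch of this step (the helper name `tokenize` is ours, not part of the lesson):

```go
package main

import (
	"fmt"
	"strings"
)

// tokenize lower-cases the sentence and splits it on single spaces,
// mirroring the two lines above.
func tokenize(s string) []string {
	sLower := strings.ToLower(s)
	return strings.Split(sLower, " ")
}

func main() {
	fmt.Println(tokenize("The dog is the best"))
	// [the dog is the best]
}
```

One caveat: splitting on a single space keeps punctuation attached to words ("thanksgiving," stays one token) and mishandles runs of whitespace; strings.Fields is a common alternative when the input is messier.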

Step 2: Counting the Words Using a HashMap

Why a HashMap?

A HashMap provides an efficient way to store key-value pairs. In our case, the word will be the key, and its frequency will be the value. This will enable us to look up the frequency of a word in constant time.

How to Count?

We'll initialize an empty HashMap (a map in Go; an object in JavaScript or a dictionary in Python) to store the occurrences of each word. Let's call this HashMap occurrences.

Code Snippet for Counting

Here's how you can populate the occurrences HashMap:

occurrences := make(map[string]int)
for _, word := range splitS {
    // A missing key reads as the zero value (0), so this check is
    // optional in Go; occurrences[word]++ alone would also work.
    if _, exists := occurrences[word]; exists {
        occurrences[word]++
    } else {
        occurrences[word] = 1
    }
}
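Putting the tokenizing and counting steps together, a runnable sketch might look like this (the function name `countWords` is our own choice):

```go
package main

import (
	"fmt"
	"strings"
)

// countWords builds the occurrences map for a sentence.
// A Go map returns the zero value (0) for missing keys,
// so occurrences[word]++ needs no existence check.
func countWords(s string) map[string]int {
	occurrences := make(map[string]int)
	for _, word := range strings.Split(strings.ToLower(s), " ") {
		occurrences[word]++
	}
	return occurrences
}

func main() {
	counts := countWords("The dog is the best")
	fmt.Println(counts["the"], counts["dog"])
	// 2 1
}
```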

Step 3: Identifying Duplicate Words

What are Duplicate Words?

Duplicate words are the keys in our HashMap with a value greater than 1, as these words have appeared more than once in the sentence.

How to Find Them?

We can loop through our occurrences HashMap to find these duplicates.

duplicates := make([]string, 0)
for word, count := range occurrences {
    if count > 1 {
        duplicates = append(duplicates, word)
    }
}
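One detail worth knowing: Go deliberately randomizes map iteration order, so the duplicates may come back in a different order on every run. If a deterministic result is wanted, sorting the output is a simple fix. A small sketch (the helper name and sample map are ours):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedDuplicates collects the keys whose count exceeds 1, then sorts
// them, since Go's map iteration order is intentionally unspecified.
func sortedDuplicates(occurrences map[string]int) []string {
	duplicates := make([]string, 0)
	for word, count := range occurrences {
		if count > 1 {
			duplicates = append(duplicates, word)
		}
	}
	sort.Strings(duplicates)
	return duplicates
}

func main() {
	occurrences := map[string]int{"the": 2, "dog": 1, "is": 1, "best": 1}
	fmt.Println(sortedDuplicates(occurrences))
	// [the]
}
```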

Step 4: Unveiling the Duplicates

Identifying Duplicates Made Easy

Once the frequency of each word has been counted, finding duplicates is straightforward. All we have to do is sift through our occurrences HashMap and collect the words that have a frequency of 2 or more. These are our duplicates.

Conclusion and Complexity


Analyzing the Complexity of Our Final Solution

Time Complexity Revisited

Let n be the number of words in our input string s.

  1. Populating the occurrences HashMap takes O(n) time.
  2. Traversing the occurrences HashMap to identify duplicates also takes O(n) time.

Summing these up, our algorithm runs in linear O(n) time, which meets the expected complexity.

Space Complexity Revisited

We've used a HashMap (occurrences) to store the frequency of each word, and a list (duplicates) to store the duplicate words. In a worst-case scenario, each word in the sentence is unique, making our space complexity linear O(n).


One Pager Cheat Sheet

  • This challenge requires returning a list of duplicate words in a sentence that consists of ASCII characters with a time complexity of O(n) and a space complexity of O(n).
  • We can use a hashmap (or JS object or Python dict) to store each word's frequency by looping through each one in the array and recording its occurrence.
  • We can easily find the duplicates by counting the occurrences of each word and looking for those with a count of 2 or more.

This is our final solution.

To visualize the solution and step through the below code, click Visualize the Solution on the right-side menu or the VISUALIZE button in Interactive Mode.
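Assembling the steps above, a complete runnable version might look like this (the function name `findDuplicates` is our own choice):

```go
package main

import (
	"fmt"
	"strings"
)

// findDuplicates returns the words that appear more than once in s,
// case-insensitively. It runs in O(n) time and space for n words.
func findDuplicates(s string) []string {
	// Step 1: tokenize the lower-cased sentence into words.
	occurrences := make(map[string]int)
	for _, word := range strings.Split(strings.ToLower(s), " ") {
		// Step 2: count each word; missing keys read as 0.
		occurrences[word]++
	}

	// Step 3: collect the words seen more than once.
	duplicates := []string{}
	for word, count := range occurrences {
		if count > 1 {
			duplicates = append(duplicates, word)
		}
	}
	return duplicates
}

func main() {
	fmt.Println(findDuplicates("The dog is the best"))
	// [the]
	fmt.Println(findDuplicates("Happy thanksgiving, I am so full"))
	// []
}
```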


Got more time? Let's keep going.

If you had any problems with this tutorial, check out the main forum thread here.