
Remove Duplicate Values from a List in Python

Learn how to remove duplicates in lists using various methods, including list comprehension, sets, and the dict.fromkeys() method.


Updated July 4, 2023


Removing duplicates from a list is an essential task when working with data in Python. You can achieve this by using several methods provided by the language itself. In this article, we’ll explore these approaches, focusing on readability and simplicity. Our goal is to provide you with a comprehensive understanding of how to remove duplicates in lists efficiently.

What are Duplicates?


Duplicates are repeated occurrences of the same value within a list or any other data structure. Removing them helps keep your data clean and can improve performance when working with large datasets.
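Before removing duplicates, it can help to see which values actually repeat. Here is a minimal sketch (not part of the original article) that counts occurrences with collections.Counter; the variable names are just illustrative:

from collections import Counter

my_list = [1, 2, 3, 4, 5, 2, 6, 7, 8, 9]

# Count how often each value appears, then keep only the values that repeat
counts = Counter(my_list)
duplicates = [value for count in (counts,) for value, c in count.items() if c > 1]
print(duplicates)  # [2]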

Method 1: Using List Comprehension

One simple way to remove duplicates is to convert the list into a set, which discards duplicates automatically because sets cannot contain repeated values. However, this changes the data type and does not preserve the original order. If you need to keep the order, use a list comprehension (shown below) or dict.fromkeys().

my_list = [1, 2, 3, 4, 5, 2, 6, 7, 8, 9]

# Converting to a set removes duplicates but loses the original ordering
unique_set = set(my_list)
print(unique_set)

# A list comprehension that keeps only the first occurrence of each value,
# preserving the original order (O(n^2), fine for small lists)
ordered_unique = [x for i, x in enumerate(my_list) if x not in my_list[:i]]
print(ordered_unique)
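For larger lists, the membership check against the slice makes the comprehension quadratic. A common alternative, sketched below (it is not from the original article), tracks values already seen in a set so each lookup is constant time:

my_list = [1, 2, 3, 4, 5, 2, 6, 7, 8, 9]

seen = set()
ordered_unique = []
for x in my_list:
    # Keep only the first occurrence of each value; set lookups are O(1)
    if x not in seen:
        seen.add(x)
        ordered_unique.append(x)
print(ordered_unique)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]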

Method 2: Utilizing Sets


Sets are collections of unique elements. Converting your list into a set and back into a list removes duplicates, but the order of the elements is not guaranteed. If you also need to preserve the original order, you can use an OrderedDict keyed by the list's elements, as shown below.

from collections import OrderedDict

my_list = [1, 2, 3, 4, 5, 2, 6, 7, 8, 9]

# Converting to a set and back removes duplicates, but order is not guaranteed
unique_from_set = list(set(my_list))
print(unique_from_set)

# An OrderedDict keyed by the elements removes duplicates and keeps insertion order
unique_ordered = list(OrderedDict.fromkeys(my_list))
print(unique_ordered)

Method 3: The dict.fromkeys() Method

This approach is similar to using an OrderedDict but avoids the extra import, since regular dictionaries preserve insertion order in Python 3.7 and above. It is a concise, memory-friendly way to keep the first occurrence of each element; note that the elements must be hashable, because they become dictionary keys.

my_list = [1, 2, 3, 4, 5, 2, 6, 7, 8, 9]

# dict.fromkeys() keeps the first occurrence of each value;
# dicts preserve insertion order in Python 3.7+
unique_dict_keys = list(dict.fromkeys(my_list))
print(unique_dict_keys)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
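One caveat: dict.fromkeys() only works when the list's elements are hashable. For lists of unhashable items, such as nested lists, a rough sketch (with hypothetical example data) is to fall back to an explicit loop:

rows = [[1, 2], [3, 4], [1, 2], [5, 6]]  # hypothetical example data

unique_rows = []
for row in rows:
    # Lists are unhashable, so compare against the results collected so far
    if row not in unique_rows:
        unique_rows.append(row)
print(unique_rows)  # [[1, 2], [3, 4], [5, 6]]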

Conclusion


Removing duplicates from lists is a common task when working with data in Python. This article covered three approaches: converting the list to a set, preserving order with a list comprehension, and using dict.fromkeys(). Choose the one that best fits your needs, whether that is raw speed, preserving the original order, or keeping the code as simple as possible.
