
wickedQuotes

There aren't any large, public datasets of quotes readily available online, so I decided to create my own by parsing and cleaning up a Wikiquote data dump.

Setup

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Usage

Download a Wikiquote data dump:

wget https://dumps.wikimedia.org/enwikiquote/latest/enwikiquote-latest-pages-articles.xml.bz2

Extract the archive:

bzip2 -d enwikiquote-latest-pages-articles.xml.bz2

Run the program:

./parse.py enwikiquote-latest-pages-articles.xml

There are two optional positional parameters: the quote cutoff length and the desired language. The default cutoff length is 100 characters, and the default language is English. The language must be specified as an ISO language code (e.g. es for Spanish).

For instance, to keep only Spanish quotes of fewer than 50 characters, you would enter the following:

./parse.py enwikiquote-latest-pages-articles.xml 50 es

Alternatively, if you don't want to filter by language, simply enter "all" (without quotes) for the language parameter. This skips language filtering entirely and massively shortens the program's run time.
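For example, to keep the default 100-character cutoff but skip language filtering:

./parse.py enwikiquote-latest-pages-articles.xml 100 all

Once the script finishes, you can work with the resulting JSON dataset directly. A minimal sketch in Python, assuming the output is a JSON file of quotes (the filename quotes.json is a hypothetical placeholder; check what parse.py actually writes):

import json

# Load the generated dataset. "quotes.json" is a hypothetical
# placeholder for whatever file parse.py writes.
with open("quotes.json", "r", encoding="utf-8") as f:
    quotes = json.load(f)

# Inspect the dataset: count entries and print a sample.
print(f"{len(quotes)} quotes loaded")
print(quotes[0])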

License

This project is licensed under the MIT License - see the license.md file for details.

Acknowledgments

Huge thanks to:
