Open Access

This week we discussed the tricky topic of Open Access in academia. As a young scholar I find myself in a position of seeing the challenges on both sides of this “issue.” On the one hand, I can see how, as a precarious young scholar, making my work widely available could lead to others easily taking my ideas, could mean not being recognized as prestigious in the way paywall-protected journal publications are, and could force me to share ideas at early stages of my development that might not align with later ideas and works. However, I think these are small concerns in the grand scheme of institutional knowledge. I am a firm believer that knowledge should not be commodified and held captive for privileged audiences, as so often happens in academic institutions. The commercialization of academia is turning knowledge into a form of currency to which only a limited number of people have access. This is deeply problematic and a pervasive structure in the US against which I stand. I am currently in a Philosophy of Science course that focuses on the commercialization of science and its worrisome tendencies, especially with patent law. As I move through this class I have become deeply skeptical of patenting, because it has seriously detrimental effects on actual human lives (as in the case of patented AIDS medication, where patents keep costs so high that people around the world who are unable to pay die at unnecessarily high rates). Patents hold a level of prestige, and the argument is that they motivate innovation; I think this works in a similar fashion to the structuring of “un-open inaccessibility,” in that there is an idea that paywalled journals and publications motivate high levels of research and knowledge production. I think patents do more harm than good and are not necessary for innovation; in a similar sense, I think restricting access does more harm than good and is not necessary for prestige, rigor, or intellectual innovation.

Furthermore, on a global level, restricting access to knowledge is a form of epistemic violence. Communities all over the world, I believe, have a right to the knowledges and histories that are preserved through institutions but often not made accessible. The United States has a plethora of resources for research; thus, I think we are responsible at the very least for sharing the fruits of those resources, the knowledge crafted and curated with them. I’m trying to refrain from going on an anti-capitalist, anti-colonial, socialist-fueled tirade about global pedagogy, so I will end here by saying I fully support Open Access. This is not merely an academic issue but one of ethics, global responsibility, reparations, and reciprocity.

Digital Humanities Pedagogy

Of all the possible topics within the Digital Humanities, pedagogy is by far my favorite! I love exploring ways to not only incorporate Digital Humanities-based skills and tools into my teaching, but also ways of teaching the field of DH itself. I’ve had the opportunity to observe several styles of teaching Digital Humanities over the years, and their influences have contributed to my own approach. Arguably the most important aspect of my philosophy of teaching (particularly in the Digital Humanities) is:

Your medium should support your message. 

A pitfall for Digital Humanities is to make everything digital, or to shape a research project around trying to use exciting new technologies. Sometimes a paper is the best medium for conveying a particular project’s message! I emphasized to my students in the Digital Humanities Mellon Scholars Program that part of developing a research project entails identifying what the project’s message is (which means considering audience, scope, and project outcome goals) and then determining the best medium for supporting that message. For example, when I introduced documentary-making, the first questions students needed to ask themselves were, “Why is it important that I make a documentary? What is my message, and how is a documentary going to enhance, support, and communicate this message?” A possible exception might be using technologies for an exploratory research project, like text mining or textual analysis across extensive bodies of text. However, even then one could probably articulate why that approach is best for the type of work one is interested in researching.

Something I have learned after teaching an introduction to Digital Humanities course and working with undergraduate researchers is that students greatly appreciate taking the time to discuss ways to utilize Digital Humanities skills, methods, and theories in future spaces of higher education, but also outside of academia in nonprofit organizations and industry. I recently attended a conference and co-facilitated a workshop with Arianna Montero-Colbert about Digital Professional Development. The structure was flexible, and based on the questions the undergraduate researchers asked, I ended up showing them my CV to demonstrate the ways of communicating Digital Humanities skills on a CV/resume. In the classroom, students asked about technology capacities that could work in spaces outside academia. Having worked as Community Coordinator for Lighthouse Immigrant Advocates, a nonprofit immigration law office, I was able to have an open conversation about how to use Digital Humanities as a selling point for being hired and how to then incorporate these skills into roles at nonprofit organizations. For example, just showing my students how to install and customize WordPress plugins that can take donations is a valuable, applicable skill.

The biggest challenge I faced teaching the introduction course was finding undergraduate-appropriate readings. As of now, much of the literature is rather advanced, technical, and jargon-laden. Assigning blogs, podcasts, and YouTube videos was much more effective than book chapters, but much of the foundational history of the field is stuck in lengthy chapters that deal far more with theoretical debates than with useful, graspable concepts that introduce students to the field, its history, its shortcomings, and its strengths. My time as a student and then as a teacher has inspired me to consider making an Introduction to Digital Humanities anthology that organizes aspects of the field in a structured and digestible manner. I have no idea if I can accomplish this, since I would want to include work that is already out there in addition to some of my own. Perhaps if I can successfully land a Digital Humanities–Philosophy dual-appointment position I will embark on such a project.

In regard to the assigned readings and materials for this week, I love the University of Minnesota’s “Accessible U” site. It is such a great resource and one I needed years ago! In fact, this semester as a TA, one of my students in a recitation section is legally blind. We have developed a strong relationship with open communication about meeting her needs, but the more I can do preemptively, so that the burden of telling me every needed accommodation doesn’t fall on her, the better. For example, using Word document heading styles so screen readers can navigate a document is such a simple but valuable avenue for making content more accessible.

I also really enjoyed Mark Sample’s talk-turned-article “Building and Sharing (When You’re Supposed to be Teaching).” As a side note, I’ve had the privilege of meeting Mark, and it’s fun to see his personality and voice clearly reflected in the language of this article. I love his point that students tend to write, research, and create for an audience of one: the professor or instructor. Digital Humanities projects are almost always publicly available, though, so shifting this notion and letting it influence the way students engage with projects is significant. His statement that classrooms were made for sharing brought back memories of kindergarten show-and-tell days. Maybe we as teachers need to bring more of the spirit of kindergarten back into the classroom: a space to show-and-tell, to share, to create, to make messes and explore, to work together (and maybe also nap time?).

Lastly, I’ve sensed a scrambling around how to teach the Digital Humanities that is only starting to be documented, formalized, and interrogated. I think those of us engaging in this field have a unique opportunity to develop pedagogical strategies and course designs that are highly reflexive and anticolonial, and that include queering language, intentionally making spaces accessible, and avoiding canonizing the same patriarchal, whitewashed, Anglo-centric narratives. Other disciplines have been around for centuries, and the fight for change is a slow one. The Digital Humanities holds promise for being intentionally formed and avoiding the aforementioned pitfalls. We need more Marisa Parhams, Miriam Posners, Safiya Umoja Nobles, and Zeynep Tufekcis.

Text Mining

A Word Cloud of all the words I have used in all of my blogs.

I will shout from the heavens that I love Voyant Tools! Before Voyant I didn’t have much of a concept of, or appreciation for, text mining. I was first introduced to Voyant by Laura McGrath when I was a TA for the Mellon Scholars Program at Hope College. She was an English major, and much of her work involved distant reading and extensive text analysis, so she built a lot of the Mellon seminar around working with datasets, utilizing text and data analysis tools, and producing visualizations. I was blown away by the speed, user-friendliness, and insightfulness of Voyant. I love the colorful visuals, the interactivity, the multilingual capacity, and the embedding functions. Such a brilliant tool! Okay, rant over. I was inspired and chose to incorporate this as a unit in the Mellon seminar when I taught it the following year as a post-bacc. (See Blog Tutorial here.) I had my students do a simple activity to introduce them to Voyant and similar methodologies by having them read an Edgar Allan Poe murder mystery, “The Murders in the Rue Morgue.” When they came to class having read the story, we ran it through Voyant to see what the tool might reveal about the text that was unexpected, and whether we could see any representations that were accurate reflections of the text. For example, looking at terms like “murder” and “weapon” in the Terms Berry visualization revealed close in-text proximity to “monster” and “animal,” which corresponded with the murderer actually being an orangutan (spoiler). Additionally, since I intentionally chose a murder mystery, could Voyant reveal any crucial plot points, or who the main character is based on name recurrence? The Word Cloud matched the close reading: the window was key to unraveling the mystery, and Dupin is the main character. It was a fun, light exercise, and student feedback confirmed it was a great way to get their feet wet with what text analysis and mining can do.
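The Terms Berry’s co-occurrence view can be approximated in a few lines of Python. This is only a sketch of the general technique, not Voyant’s actual algorithm; the toy sentence, the tiny stop-word list, and the window size are all illustrative assumptions.

```python
from collections import Counter
import re

# Illustrative stop-word list; Voyant ships a much larger one
STOP_WORDS = {"the", "a", "an", "of", "and", "in", "was", "to", "it"}

def cooccurrences(text, window=5):
    """Count how often pairs of content words appear near each other,
    roughly the relationship the Terms Berry visualizes."""
    tokens = [t for t in re.findall(r"[a-z']+", text.lower())
              if t not in STOP_WORDS]
    pairs = Counter()
    for i, word in enumerate(tokens):
        # Look ahead at the next few tokens only, so each pair counts once
        for other in tokens[i + 1 : i + window]:
            if other != word:
                pairs[tuple(sorted((word, other)))] += 1
    return pairs

sample = ("The murder was brutal. The animal, a monster of an orangutan, "
          "left the weapon near the window.")
for pair, n in cooccurrences(sample).most_common(3):
    print(pair, n)
```

On a full text, pairs like (“animal”, “murder”) bubble up in much the way the Terms Berry surfaced them for Poe’s story.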

In general I have not conducted extensive research using text mining, but through projects like Ben Schmidt and Mitch Fraas’s “Mapping the State of the Union,” I have come to appreciate its potential, as well as the nuances of the intentional decisions that get made about which key terms are selected. Lisa Rhody’s “The Story of Stop Words” did an exceptional job of bringing to the forefront many of those nuanced decisions and their implications in the realm of text mining. I think that is one of my main takeaways from this seminar thus far: every facet of a project involves intentional decision-making, which means bias can enter research at any stage, consciously or not, and one should actively interrogate one’s decisions and document them. In fact, looking back, even before my exposure to Voyant, Brandon Walsh ran a workshop at the 2017 annual Undergraduate Network for Research in the Humanities conference; there he introduced the concept of text mining, walking us through “bags of words” and what it means to make selective choices about which words to include, and the impact such choices can have. It seems that because the Digital Humanities incorporates technologies, there is a false perception that this scientific technology makes research more objective. I think this is wrong on two counts. One: scientific research is value-laden and not as objective as one might want to think (as my Philosophy of Science course would attest; see Robert Merton’s Social Theory and Social Structure, but also Safiya Umoja Noble’s Algorithms of Oppression, which highlights the biased, prejudiced nature of human-made technologies many take for granted as objective, like Google searches). Two: the number of perhaps small but still significant deliberate choices that impact the scope, nature, results, and effects of a Digital Humanities project using such technologies indicates, to me, a high degree of subjectivity, and this, I’d argue, is extremely important to remember.
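Rhody’s point about stop words is easy to demonstrate: the same bag-of-words count produces a very different “top terms” list depending on which words you decide to throw away. A minimal sketch (the sentence and the stop-word list are made up for illustration):

```python
from collections import Counter
import re

def bag_of_words(text, stop_words=frozenset()):
    """Raw term frequencies: the 'bag of words' behind a word cloud."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stop_words)

text = "the window was locked and the window was barred and the door"

# Without stop words, function words drown out the content words
print(bag_of_words(text).most_common(2))

# With them removed, 'window' (the actual clue) rises to the top
print(bag_of_words(text, {"the", "was", "and"}).most_common(2))
```

The choice of which words count as “stop words” is itself one of those deliberate, bias-prone decisions worth documenting.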

This week I played around with an idea I mentioned in class: putting my own papers into Voyant to see if I could identify a particular style. It’s not the most relevant exercise for my research, but text mining is not exactly in my area anyway, so I went for something different and relevant to me as an emerging scholar. I made the following decisions:

  • I chose the final papers I wrote for my first semester graduate school seminars
  • I did not include the bibliographies
  • I did include the titles of the papers
  • My citation format is MLA so there are in-text parenthetical citations

This Word Cloud is fascinating to me. It definitely fits the content of the papers, but it’s also helpful for tracking recurring themes in my work and possible topics for my dissertation.

I think the most interesting result of the Terms Berry is the relationships it shows: church occurs with Maracle, indigenous, and structure, and spiritual occurs with music, physical, and knowledge. This has reignited my passion for these subjects and given me an interesting lens into my writing style.

I also noticed that the Bubblelines tool tracked the main terms across all of the papers, showing that language and body were the most consistently used terms throughout my writing. Perhaps Philosophy of Language is where I am headed…

Visualization and Networks

This week focused on visualization tools with an emphasis on relationships or networks within data. I decided to pursue my more law-focused passions and see what such tools could show about the current state of immigration in the United States. After some quick searching I came across the United States Department of Homeland Security’s datasheets about immigration, particularly about remittance inflows and outflows. Remittance is the sending of money by someone working in the United States to family in their country of birth, or, vice versa, the receiving of money from one’s country of birth while in the United States. I downloaded the spreadsheet tracking remittance inflows (which I believe means the amounts going from folks in the United States to other countries) from 1970 to 2018. The results were interesting to look at for each country, as some increased rapidly and others decreased. I was most interested in looking at Latin American countries like Mexico. What this shows is that a staggering number of folks from Mexico working in the United States are in fact here to support family. The Breve chart (screenshot below) captures this steady increase in remittance inflow, at an all-time high in 2018 of $33,675 million, representing 2.8% of GDP that year.
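The two figures from the datasheet can be cross-checked against each other with a line of arithmetic: if $33,675 million is 2.8% of GDP, the implied GDP is about $1.2 trillion, which is in the right range for Mexico in 2018.

```python
inflow_millions = 33_675   # 2018 remittance inflow to Mexico, per the datasheet
share_of_gdp = 0.028       # the 2.8% share quoted alongside it

# If the inflow is 2.8% of GDP, dividing recovers the implied GDP
implied_gdp_millions = inflow_millions / share_of_gdp
print(f"implied GDP ≈ ${implied_gdp_millions / 1_000_000:.1f} trillion")
```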

On the World Bank site, where I found more information, I came across this brilliant TED Talk that uncovers the broken system of remittances and just how much of the humanitarian relief weight immigrants are bearing. In fact, annual remittances account for $413 billion sent to developing countries, over three times the annual amount spent on humanitarian relief aid ($135 billion). Millions of immigrants are NOT sneaking into the country to try to make it rich or coast off US government programs; they are in fact a massive driving force in sustaining their own countries, no thanks to the shady colonial exploits of the United States in developing countries.

As for other tools, Palladio is brilliant when it cooperates. I’ve used it before, and when it works, it’s a remarkably streamlined tool. But I have the most difficult time getting the mapping part to work, even after painstakingly entering longitude and latitude for every country listed in the United States Department of Homeland Security data on how many people from countries all over the world were “apprehended” between 2005 and 2014. The exact language is “Aliens Apprehended by Region and Country of Nationality,” which is all kinds of offensive. Here is an image of the spreadsheet with the coordinate columns I added. Click the photo to access the full spreadsheet.
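The painstaking manual coordinate entry can also be scripted. A sketch, assuming the DHS sheet has a country-name column and that Palladio will read a combined “lat,lon” cell (the column names, country rows, and counts below are invented for illustration):

```python
import csv
import io

# Toy lookup table; real work would pull from a full gazetteer
COORDS = {
    "Mexico": (23.6345, -102.5528),
    "Canada": (56.1304, -106.3468),
}

# Stand-in for the downloaded DHS spreadsheet
raw = io.StringIO("Country,Apprehended\nMexico,605000\nCanada,700\n")
rows = list(csv.DictReader(raw))

for row in rows:
    lat, lon = COORDS.get(row["Country"], ("", ""))
    # One combined cell, the shape Palladio's map layer reads
    row["Coordinates"] = f"{lat},{lon}"

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["Country", "Apprehended", "Coordinates"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

A lookup table like this also keeps the coordinates consistent across sheets, which matters when Palladio silently drops rows it cannot parse.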

I was hoping Palladio would show sized nodes on a map, with layers to show changes over time, but even after copy-pasting coordinates from this chart I still had no luck. So below is a Palladio graph that looks like a sunflower, with sized nodes indicating how many people were “apprehended” from different countries.

Not exactly helpful. But the map would be, and again, when Palladio works, it’s great. Here is a link to a tutorial I made for Palladio during my post-baccalaureate position with the Mellon Scholars Program. As a general reflection, visualization and network programs are incredibly powerful tools that enhance arguments and clearly communicate relationships. In some ways I think these tools can be the most difficult to conceive of for a research project, but they can be the most rewarding when applicable and effectively utilized.

Spatial and Temporal Visualizations

One of my favorite types of projects in Digital Humanities is mapping. Mapping can tell stories, argue, and reveal different perspectives. The readings from this week talked about several tools and platforms I have used before, but I was surprised by the sheer number of other tools of which I had not heard! The volume of options can be overwhelming, but I like the flexibility this can allow for when crafting a specific project.


One of the main tools we dealt with this week was Knightlab’s StoryMapJS. Having used this before, I was reminded of how clean, straightforward, and pleasant the interface is. Google Maps is great for its layers, media integration, and easy import/export of data, but what I appreciate about StoryMap is that it does force a linear narrative, a direct path for interaction. This is not convenient for every type of project, but for a project I did investigating the life and philosophies of the celebrated Indian poet, playwright, and musician Rabindranath Tagore, StoryMap was a great tool. In particular, I was exploring Tagore’s philosophy of education and architecture and how the two influenced one another. Thus, space was very important. I needed to be able to show a chronological progression of Tagore’s life and influences, as well as the geographic sites where, for example, he designed, built, and ran a college in Santiniketan.

In terms of pure chronology, Knightlab’s TimelineJS is similar, but the flexibility of using a spreadsheet, especially one integrated with Google Drive, is excellent. As I have studied various ethical theories, I tried to keep the progression of thinkers and influences all in my head but found this frustrating. So I created a small timeline of key thinkers/writers within the Western canon to outline the evolution of ethical theories and standpoints in Western ethics. I say Western repeatedly because to think these are the only theories of ethics and the only contributors to the field is vastly mistaken. Unfortunately, the Western canon is treated as though it were the all-encompassing field of ethics, and that framing has accompanied the courses I have taken in ethics. So this timeline is small and specific, but I found it useful for visualizing and conceptualizing how one theory reacted against another or evolved into a different one.
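Behind the Google Sheet, TimelineJS ultimately consumes a simple JSON structure, so a timeline like my ethics one could also be generated programmatically. A sketch with a few canonical entries (the dates are the conventional approximate ones, and I’m assuming TimelineJS’s documented events/start_date/text fields):

```python
import json

# A few Western-canon entries like those in my timeline
thinkers = [
    ("Aristotle", -350, "Virtue ethics in the Nicomachean Ethics"),
    ("Immanuel Kant", 1785, "Deontology in the Groundwork"),
    ("John Stuart Mill", 1863, "Utilitarianism"),
]

# Shape mirrors TimelineJS's JSON: a list of events, each with a
# start_date and a text block (headline plus body)
timeline = {
    "events": [
        {
            "start_date": {"year": year},
            "text": {"headline": name, "text": note},
        }
        for name, year, note in thinkers
    ]
}

print(json.dumps(timeline, indent=2))
```

Generating the JSON from a list makes it easy to rebuild or reorder the timeline as the canon (or counter-canon) grows.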

Having made this timeline several years ago, I found making a new timeline refreshingly familiar. I decided to display the typical history of the Digital Humanities as it is often presented, the dominant narrative. Then I wanted to show the holes in this story and the silenced contributions by creating a counternarrative. The two timelines are here:

While I do not have datasets for my current research, watching class demonstrations of the exciting possibilities of Tableau makes me wish I did! What I liked about this tool, in addition to its clean interface, was the integration of labels and colors with data points, providing options for how best to represent quantities and information. I am thinking about doing a pedagogy-specific project sometime in the near future, and I wonder if assessments of education systems, allocations of resources, measures of success, and so on, mapped across the United States, might serve as a compelling visualization to argue that educational segregation essentially persists and that the country needs intentional reform…perhaps! All in all this was a fun week and a good source of inspiration to think about projects in new ways.

Audience Engagement

This week in the DH865 course we were delayed by a true Polar Vortex, hence the delay in this post. However, the focus of the week, audience engagement, is one that can never be delayed when creating a digital project (in my opinion). I argue that it should drive the research project and be one of the very first aspects considered when designing it. The intended audience should influence numerous decisions, especially in design, including the style of language (how much jargon versus explanation is acceptable), the use of graphics, and platform choice. I loved learning about the Mukurtu project, specifically designed to preserve and share indigenous aboriginal knowledges with other tribe members. The article “A Community of Relations: Mukurtu Hubs and Spokes” by Kimberly Christen, Alex Merrill, and Michael Wynne did an excellent job of demonstrating the specific design decisions made to meet the needs of an intended audience concerned with sacred knowledges meant to be viewed only by certain people within the community. The tension in the digital age is noticeable with audience engagement: on the one hand, most digital projects are accessible to anyone with an internet connection; on the other, no project is intended for every single person in terms of utility and comprehensibility. Perhaps it is best to be transparent about the intended audience while nodding to the fact that others can certainly view and potentially benefit from the project. I think Miriam Posner is particularly good about this in her blogs.

Considering my own audience, much of my work is for the world of academia. Philosophical writing is more often than not full of jargon and nuanced argumentation styles that cater to other philosophers and academics. However, this is exactly a problem I see with the discipline. I think philosophy is most valuable when it is applicable beyond the walls of academia. This is my social justice orientation showing. Thus, my audience for digital projects is typically an academic one, but hopefully also a broader one when dealing with matters of social justice. Because I don’t yet have an area of specialization, I can’t say much in terms of specifics, but in reference to recent digital humanities projects, I can say that my audience includes musicians (professional, academic, and otherwise), Christian/theology philosophers, Continental philosophers, and hopefully, as I learn more, decolonial/anticolonial and feminist scholars. As a dual-degree student pursuing a JD with an emphasis in immigration policy and reform, my audience may also come to include policymakers and/or individuals going through the immigration process.

In addition to various readings about audience and audience engagement, we read about the idea of creating project personas. As mentioned in class, this process has some benefits (especially for larger companies), but I think it would be uncomfortable and problematic to create characters with backgrounds, epistemological standpoints, and livelihoods based on people you know for the sole purpose of company development. So, when thinking about possible personas for my digital work, I kept them somewhat generic, because any more specificity in terms of gender, race, class, and so on again seems problematic.

  • My first persona is a music educator who teaches high school choir. This individual teaches for six hours a day, working with different choral ensembles both to teach students about music and theory and to bring students and the community together in collaborative concerts and events. This music educator has been teaching for 12 years, is quite familiar with the school and local community, and is up to date on the politics of music education. This individual’s goals are to teach students about music and theory principles, develop good singing techniques, foster teamwork and pride in collective music-making, and give back to the community. Work in aesthetics and pedagogy would affect how this individual teaches and perceives music and the role of theory.
  • A second persona is a typical Continental philosopher, tenure-track at a small liberal arts college in the United States. This professor does conduct some research but is primarily focused on teaching philosophy courses like Applied Ethics, Continental Feminism, or Moral Psychology. This professor’s main concerns are to stay knowledgeable about the changing nature of these fields, receive tenure, and introduce important philosophical skills like critical thinking, argumentation, writing improvement, and consideration of alternate perspectives.
  • A third persona is an immigration attorney working at a nonprofit. This attorney is early in their career and full of energy and passion, though certainly tired and worn down daily by the obstacles of the legal system. The attorney stays constantly up to date on immigration policy and looks for ways to inform local communities, both people in need of immigration services and those who are not, of the issues and nuances of immigration processes in the US. The attorney has a passion for social justice and is concerned with serving immigrants, refugees, and undocumented people, as well as educating larger communities.

Bearing these possible personas in mind, I want to be transparent about my own values and interests, because I think audiences deserve to know them. I recognize that these are three very different personas, so not all of my research fits each space. Therefore, I will continue to be conscientious about language choice and academic references. As a first-year, I look forward to continuing my studies so that I can be more specific and understanding about who my audience will be. Until then, these are a broad set of considerations for possible future digital projects.


Data in Humanities

Archives, Data, and Humanities: A Philosopher’s Reflections

This week our Digital Humanities seminar served as a good reminder of the possibilities and breadth of data potential in humanities fields. Miriam Posner’s blog “Humanities Data: A Necessary Contradiction” was not only an excellent introduction to the notion of all objects bearing metadata, but also a further case for why philosophers should consider data-based projects because data speaks, data can argue. Though I am not a historian and do not actively seek out archival projects, I have had a few experiences with archival-turned-dataset research projects that have taught me a great deal about the local history of Dutch Holland, Michigan, Portuguese influence in the former colony of Goa, India, and the vast works of Indian philosopher and writer Rabindranath Tagore. Visiting archives was both astounding and concerning for different reasons. (As an aside, there was something so profoundly sad about visiting the Goan archives in India and seeing worn, worm-eaten, molding diaries falling further into decay. The loss of cultural history like that hurts the soul.) As discussed in class, it is important to remember that an archive tells a story, and there are those in control of this narrative actively deciding to sculpt this story in a particular fashion. Remembering that archives are the results of decisions made by specific people is crucial to pushing against problematic understandings of history and modern culture; one must challenge easy excuses that historically oppressed or marginalized communities were not participating in events and narratives, because more often than not these communities have been intentionally curated out of such narratives. For example, during my sophomore year of college I was struck by this fact when I was faced with the task of producing a Holland-based digital humanities project. 
I was concerned about the lack of visibility of the Hispanic/Latino community in Holland, both in terms of businesses and physical presence in the town’s design and in its absence from the archives. According to the most recent census, this community comprises nearly 30% of the Holland population, and yet there are next to no references to it in the archives. There was such a contradiction between what the Tihle Archives said was the history of Holland and what the actual communities, physical architectures, and ongoing traditions like Fiesta said was the history. I loved the Data Feminism book by Catherine D’Ignazio and Lauren Klein because it specifically addresses the ways in which data can be shaped to ignore, or, in contrast, intentionally reveal undocumented narratives. This focus on articulating narratives, especially counternarratives to the dominant historical discourse, was one I carried forward into actual data-centered projects like Ethics of Expropriated Art, which used museum permanent-collection data to demonstrate power dynamics and complex international relationships in art expropriation. That project taught me about the challenges of data curation and standardization. The readings by Gilliland, Tanner, Milligan, and the Library of Congress all pointed to various facets of data and metadata curation standards and practices, which were insightful and would have been incredibly helpful when I was designing my project! Though I am still not 100% clear on everything TEI does, from what I do understand it is one more tool to help systematize, organize, and standardize data to make it accessible and computer-analyzable, which is fantastic. These readings also reminded me of how much I love that Omeka lets users add their own metadata categories. The flexibility is so valuable for big, messy projects!

When considering the new capabilities of big-dataset curation, I am fascinated by the new possibilities for research approaches. Specifically, with data analysis and visualization tools like Voyant, Palladio, and RAWGraphs, plugging datasets or text files into these programs can actually prompt questions, not just attempt to reveal answers. I liked reading Franco Moretti’s book Graphs, Maps, Trees in my undergraduate years because he dissects the ways in which computer readings of texts present new perspectives and questions for exploration that might not have been realized through close readings. As aforementioned, philosophy does not lend itself to many obvious avenues for data-based projects, so I have not had extensive time to devote to this method of work; however, my understanding of this type of research was broadened by Moretti and then greatly enhanced when I designed and taught the datasets unit of the Mellon seminar. I reengaged in the process with my students as they chose sources like rare books or Twitter hashtags to curate into spreadsheets, asked research questions of the data, ran them through data analysis and visualization programs, and drafted prospectuses for projects based on these initial findings. Taking students through this process was tedious for them, much more so than working with pre-made datasets, but I think it was valuable for them to see from just how many sources they can glean valuable information and compelling research topics. Most importantly, and most relevant to being a philosopher, this work can form arguments, and strong ones at that! For me and many of my students, this type of work was the first of its kind to be argument-based without just citations and occasional statistics. Graphs, maps, tables, charts, and figures coalesced into robust statements that translate to broader audiences.
I love this aspect of the digital humanities, and though philosophy may not be an obvious or easy fit with this type of work, when the two come together, I think there is great potential for powerful projects, especially in the areas I am interested in: Latina Feminism and decolonial/anticolonial studies.

Digital Tools and Reviewing Digital Projects in Philosophy

This week in DH865 we discussed various approaches to digital project review within our areas of study. Additionally, we were asked to find digital tools that might contribute to and foster digital work in our focus areas. Admittedly, I have a pessimistic view of philosophy’s relationship to the digital, but hey, good philosophers are supposed to be critical, right? Philosophy as a discipline has been historically resistant to change and guards its gates closely. While traditional methods of reading and writing certainly remain great forms of philosophical engagement, I think the digital age offers unique opportunities that the field ought to consider and incorporate. Unless one is studying metaphilosophy, philosophy always needs another subject about which to philosophize, and these subjects can be engaged and presented in various forms and fashions. For example, philosophy of art or aesthetics must involve art, which can be digitally rendered and can bring new perspectives and experiences to viewers in ways that a paper could not. Unsurprisingly, there are not many digital projects and tools out there specifically by or for philosophers. However, I would argue that with some creativity and willingness, philosophy could make use of some of the incredible tools that already exist and bring the discipline out from behind the gates into the deeply intersectional, interdisciplinary world of the 21st century.

For the projects already pushing into the digital, interdisciplinary spaces of philosophy, serious challenges arise for review! I found the individuality of each type of philosophy project difficult to review with standardized guidelines because the subject of the philosophizing influences which aspects should be considered. Philosophy of art/aesthetics might need review criteria specific to art and visualization, whereas metaphysics may require quantifiable data analysis and proper scientific citations alongside the philosophical aspects. As a more creative, Continental type of philosopher, I found myself more inclined to consider creativity and design, which I suspect is a different emphasis from my analytical-philosopher counterparts, who would more deeply consider word choice and argumentation style. All in all, the process was a useful time of disciplinary reflection and a healthy reminder that this process is challenging.

Digital Tools:

Project Vox

Stanford Encyclopedia of Philosophy

Center for Digital Philosophy

Public Philosophy Journal

Mapping & Timeline Tools: Neatline, StoryMap, Google Maps, TimelineJS

Text and Data Analysis & Visualization Tools: Voyant, Palladio, RAWGraphs

Storytelling Tools: Scalar, documentary technologies (Final Cut Pro), Unity

Digital Projects in Philosophy Review Rubric:

  • Argument: Is the argument for this project clear and effective? How is the argument supported by the medium(s) through which it is presented? Does the argument contribute to the field and engage with philosophical discourse on the subject, considering other voices and perspectives in the subject area (assuming such discourse exists)? Similarly, does the project adequately support its claims through evidence and proper citations?


  • Creativity/Ingenuity: How does this project contribute something new to the philosophical discourse on its subject (assuming such discourse already exists)? Does the project consider possible objections? Are the digital or technological components of the project integral? In other words, is this a truly digital humanities (in philosophy) project, or is it a humanities/philosophy project simply made digital?


  • Methodology: What tools did this project utilize? Are the technological aspects of the project informing the non-technological aspects, and vice versa, to creatively craft an interdisciplinary final product? Is the technology enhancing the overarching message in a clear and effective manner? What sources did this project require? Does the project properly credit all labor contributors? Did the process of creating this project involve human subjects, and if so, were the proper guidelines respected (media releases, copyright, etc.)?


  • Audience, Design, User Experience & Accessibility: Who is the target audience for this project (e.g., students, colleagues, broader communities, nonprofit organizations)? Is this project a pedagogical tool? Does the project address accessibility, both by considering the target audience in its language use and design and by attending to general accessibility issues like font size, color choice, and translation? Does the project use inclusive language? Is the project making assumptions about the universality of certain experiences? Is the project easy to navigate, use, and understand? If the target audience is likely to view the project on a phone, is it mobile-friendly?


  • Sustainability Considerations: Have the project creator(s) considered the sustainability of the project? If so, what are the intended plans for maintenance, both in terms of technologies and content?

Reviewing “Project Vox”:

Project Vox is a feminist philosophy project dedicated to expanding the notion of the philosophical canon in the early modern period by highlighting the lives, works, and key contributions of women who were and are too often ignored. The project is a digital archive of information about several significant women of this period and their works, a digital image gallery of texts and paintings meant to broaden typical approaches to philosophy, and a collection of sample syllabi for courses that intentionally incorporate women philosophers. While the project involves multiple focuses and sub-projects, the main aim of Project Vox is to cultivate feminist engagement with modern philosophy for students, faculty, and historians. In essence, the project is rewriting the traditional canon and historical understandings of idea development and transmission in Europe from the 1600s to the 1800s. The project began in 2014 with a group of faculty and staff at Duke University collaborating and eventually launching the official website in March 2015. Now, several years later, the project has received recognition in major outlets like The Atlantic, The Washington Post, and the Times Higher Education Supplement. Additionally, Project Vox has sponsorship from the Andrew W. Mellon Foundation and the National Endowment for the Humanities (1).

Project Vox uses multiple tools and methodologies to reach its intended aims. The hub for Project Vox is a WordPress site containing extensive biographies and descriptions of the philosophical ideas of Mary Astell, Lady Masham, Margaret Cavendish, Anne Conway, and Émilie Du Châtelet. Additionally, the site hosts an embedded timeline created with TimelineJS through Knight Lab, which traces the history of key figures, men and women, and their works during the modern period. A photo gallery of significant locations, portraits, texts, and paintings constitutes another facet of Project Vox’s attempt to broaden typical engagement with philosophy as a predominantly text-based discipline, as well as to foster best practices for students: “We aim to help students become savvy consumers of digital images. In disciplines such as philosophy, which has largely ignored visual culture, the digital presentation of images—including everything from photographs of portraits to digital facsimiles of books—is rarely held to the standard used for the interpretation of literary works.” (2) To address these concerns, Project Vox has a rigorous review and research process for new contributions to the site, be they images, translations, texts, or syllabi. This process is clearly spelled out and explained on the “Methods” page of the site.

The pedagogical focus of Project Vox deliberately encourages faculty and graduate-student viewership, though the site is rich with resources about the modern period of philosophy that are useful for undergraduate students as well. What is especially excellent about this project is that it embodies the principles it seeks to instill in its viewers; in other words, Project Vox clearly demonstrates best practices around proper citations, media usage and copyright, and clear recognition of labor contributions. The technological aspects are clean and all facets appear to be in working order. The site is somewhat mobile-friendly, but certain features, such as the interactive timeline, do not work. The design is clean and straightforward, though the font size is on the small side and could perhaps benefit from being slightly enlarged to assist viewing.

Overall, the aims of Project Vox appear to be largely successful, and with a project team, advisory board, academic partners, and large sponsorships, the sustainability and growth of the project seem promising. As the project grows, I would encourage contributors and project team members to consider other countries’ contributions to modern philosophy, as the focus of Project Vox is currently quite Western European-centric. Doing so would not only bring to light more women and their contributions to modern philosophy, but it would also deliberately decenter the Western canon and include the voices and ideas of people of color from around the world. Seeking out Indigenous philosophies, East Asian philosophies, and Latin American philosophies, among many others, would enhance Project Vox’s feminist endeavor to expand the canon and would challenge the Western-centric tendencies of philosophy education.

Documentaries Tutorial

One of the biggest undertakings in the digital humanities is a documentary project. Documentaries are time-consuming and require a plethora of skills and organization. As you may have noticed from the documentary I made as a sophomore, I am anything but a professional. But, having gone through the process, I am aware of the workload and the steps necessary to complete one. Thus, I present to you a Prezi tutorial that outlines the main steps to consider when making a documentary. I presented this to the class on Monday; then, on Wednesday, I ran a play-with-equipment activity day in which the students visited stations with different types of audio and video tools. At these stations they had challenges to complete as a hands-on, team-based, teach-yourself experience. The students really enjoyed getting to interact with the equipment, and while they didn’t learn the exact ins and outs of each tool, they left feeling confident enough to go beyond the basics they learned in class and figure things out on their own. Click the image below to view the Prezi.

As always, this presentation is the intellectual property of Taylor Elyse Mills and is protected under a Creative Commons License. That said, I want to thank the individuals whose materials are also included in this presentation. I do not take credit for the images or for OHLA’s information about oral histories referenced in this Prezi.

Text & Data Analysis Part 3: RAWGraphs Tutorial

Some of the first ventures into the field now known as the digital humanities began when humanists wanted to use computers to read and analyze large bodies of text. From Father Busa’s machine-readable index of Aquinas onward, some of the most popular DH projects today have been based on text and data analysis. This tutorial highlights how to use and understand one of three unique analysis tools. RAWGraphs is an easy-to-use, free online program that converts quantitative datasets into visually attractive graphs. The tool is so easy to use that my tutorial feels almost unnecessary, but because it remains relatively unknown, I will include it in this three-part text and data analysis series. Click the picture below to find the PDF tutorial!
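For anyone curious about what “quantitative datasets” means in practice, RAWGraphs takes simple delimited data pasted or uploaded into the browser. Here is a toy dataset of the kind it accepts; the column names and values are invented purely for illustration:

```
year,medium,works
2018,painting,34
2018,sculpture,12
2019,painting,41
2019,sculpture,18
```

From there, you pick a chart type, map the columns onto the chart’s visual dimensions, and export the result for use in a paper or website.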