Designing and Building for Ourselves

Originally written and published on LITA Blog, http://litablog.org/2017/04/designing-for-ourselves/ 


I’m in the throes of designing a new help desk for our department that will serve to triage help tickets for approximately 15,000 employees. This has been a major undertaking, and retaining the confidence that I can get it done has been a major challenge. However, it’s also been a really great exercise in forcing me to be introspective about how I design my own ethics and culture into the system.

When we design and build systems for ourselves, we design for what we need, and if you’re like me, you also aim to design for simplicity and the least work possible that still accomplishes your end goal. When I’m designing for myself, I find that I am more willing to let go of a feature I thought I needed because another one will do the job okay, and okay is enough, especially if it means less work for me.

Designing for ourselves is, in a way, easier than designing for someone else. You essentially know what you need; there’s no guesswork or communication gap. Yes, we can get caught up in semantics about how we may not actually understand what we need, and thus may build something that doesn’t achieve the end goal we had. But hopefully, in the process, we evolve and learn to design and build what we really need.

Also, designing for ourselves forces us to let go of complex and unnecessary features and build a simpler product that will hopefully be easier to maintain over time. I cannot recall a time while working in libraries when we (library folk) were not hooting and hollering about the awfulness of the library technology ecosystem. As I mentioned, I’m in the depths of designing a new service desk for my team (in JIRA Service Desk), and I find myself asking: “Do we REALLY need this? Can this complex setup be accomplished through a different, simpler method? Can we maximize this setup and use it in more than just one functional way?” When I have to do all the legwork, I think more carefully about essentials versus nice-to-haves than when we hired someone else and I was the “ideas person” – and I was probably much less flexible on the tedious items then.

If the load that I carry and my intimate connection to the build force me to think differently about what we do and don’t need, this suggests that maybe we have the wrong people designing library systems. Or at least maybe we don’t have the right people involved throughout the design and build process. Vendors need to include librarians who work in the trenches in the design process. There needs to be representation from the academic, public, corporate, museum, medical, special, etc. communities, at a level that is more than just “We’re looking for feedback we might incorporate in the future!” I don’t yet have an answer for how we can accomplish that, but I have ideas on where to start. Stay tuned for “Why you should leave your library and work for the ‘Dark Side.’”

The flip side to this is that maybe my intimate connection with the workload also encourages me to overlook problems and take shortcuts that seem fine but really ought to be examined carefully. What comes to mind is a presentation I refer to frequently: Andreas Orphanides’ Code4Lib 2016 talk, Architecture is politics: The power and the perils of systems design [1]. Design persuades; system design reflects the designer’s values and the cultural context [Lesson 2 in Andreas’ talk].

Fortunately for me, this came to light while I’m still in the middle of the design process. It’s not an ideal time, because I’ve already done a lot of work, but the opportunity to step back, adjust, and try again sits within reach. I’ve started reexamining our workflows, frontend and backend. It’s going to take more time; had I thought sooner about the shortcuts I was taking and their impact on the user experience, maybe I’d have less reexamining to do.

When we design for ourselves, how often do we compromise on something because it makes the build easier? Does our desire to just get the job done cause us to drop features that might have made the design stronger, because leaving them out meant less work in the end? If someone else were building your design, would you demand that the feature be included, even though it’s difficult to do? Does our intimate connection with the system design encourage us to keep building in poor values? Can we learn to be more empathetic [2] in our design process when we’re designing for ourselves?

I hope I’ve encouraged you to consider what you may be missing when you design a system for yourself, and what habits you’re forming that will influence how you design a system for another.
Cheers, Whitni


[1] Slide deck: http://bit.ly/dre_code4lib2016  Video of Talk: https://youtu.be/P03kD_Q5qcU?t=38m36s

[2] Empathy on the Edge http://bit.ly/erl17_empathyontheedge

Never Again.

Write a list of things you would never do. Because it is possible that in the next year, you will do them. —Sarah Kendzior

I, Whitni Watkins, hereby commit to the neveragain.tech pledge [pasted below]. Please stand with me and hold me to it.

Our pledge

We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration’s proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable.

We have educated ourselves on the history of threats like these, and on the roles that technology and technologists played in carrying them out. We see how IBM collaborated to digitize and streamline the Holocaust, contributing to the deaths of six million Jews and millions of others. We recall the internment of Japanese Americans during the Second World War. We recognize that mass deportations precipitated the very atrocity the word genocide was created to describe: the murder of 1.5 million Armenians in Turkey. We acknowledge that genocides are not merely a relic of the distant past—among others, Tutsi Rwandans and Bosnian Muslims have been victims in our lifetimes.

Today we stand together to say: not on our watch, and never again.

We commit to the following actions:

  • We refuse to participate in the creation of databases of identifying information for the United States government to target individuals based on race, religion, or national origin.
  • We will advocate within our organizations:
    • to minimize the collection and retention of data that would facilitate ethnic or religious targeting.
    • to scale back existing datasets with unnecessary racial, ethnic, and national origin data.
    • to responsibly destroy high-risk datasets and backups.
    • to implement security and privacy best practices, in particular, for end-to-end encryption to be the default wherever possible.
    • to demand appropriate legal process should the government request that we turn over user data collected by our organization, even in small amounts.
  • If we discover misuse of data that we consider illegal or unethical in our organizations:
    • We will work with our colleagues and leaders to correct it.
    • If we cannot stop these practices, we will exercise our rights and responsibilities to speak out publicly and engage in responsible whistleblowing without endangering users.
    • If we have the authority to do so, we will use all available legal defenses to stop these practices.
    • If we do not have such authority, and our organizations force us to engage in such misuse, we will resign from our positions rather than comply.
  • We will raise awareness and ask critical questions about the responsible and fair use of data and algorithms beyond our organization and our industry.

 

Update to the Problem: we have a solution

A few months ago there was a problem, described in “Where I continue to admire the Problem without a complete Solution.” It has actually been solved for a few months now, but sh!t happens, and I am only now getting around to providing a solution: one that works for me, but may not work for you. This is pieced together haphazardly, mainly so I have a note of it, and so that if you’re looking for help on doing something similar, you have a much better starting point than I did.

After a hot minute with Python-LDAP, I determined it was a beast I was not interested in taming at the moment because, well, I had another option. Mind you, at the time we thought this ‘other’ option was going to be easier. I don’t know if it was or not, but it took some serious neuron firing.

At one point I dove deep into VBA scripting, where I figuratively lost loads of hair and aged 20 years. The script scraped hundreds of emails for a text string (a unique ID), parsed it out into an Excel file (staying within the Microsoft Office suite was deemed significantly easier than going outside it), ran line by line against LDAP to print the requested attributes, then converted the output to CSV and emailed it to a colleague to do a mail merge.

**confession** I actually signed up for a Stack Overflow account because of this!

One Solution but not THE solution:

A VBScript plus a Bash script that ran LDAP queries and Python conversions, and used Mutt to send an email attachment.
I’ve changed values to neutral placeholders; you will need to update them to match what you need.
Here is my repo with the files you’ll need if you decide to take the VBScript route: https://github.com/whitni/VBScriptandLDAP

Once you set this up in VBA, you can set it as a macro. I had it set as a rule at first, but then switched to doing something different right after I had this solution in place… so I didn’t bother.

The Solution I settled on:

I returned to a single bash script that queried LDAP, matching on certain attributes. The primary change was the decision to send emails out based on start dates rather than on when we received an email notification from HR. This will soon be turned into a cron job and I can dust my hands of it, BUT I walk away with significantly more knowledge of LDAP, AD, VBA and all that…

#!/bin/bash
# A simple script

## date format ##
today=$(date +"%F")
startdate=$(date +"%Y%m")

## backup path ##
BAK="/path/that/you/saved/file/newhires_$today"

# LDAP search query
ldapsearch -W -h your.AD.server.com -x -D ldapusername@email.com -b "dc=$1,dc=$2,dc=$3" -s sub "attributes you need to match or not match on" attributes you want > "$BAK"

# Convert LDIF to CSV
python LDIFtoCSV/LDIFtoCSV.py "$BAK" > "/path/that/you/saved/file/newhires_$today.csv"

# Send email with the file attached
mutt -a "/path/that/you/saved/file/newhires_$today.csv" -s "New Hire Emails" -c persontoCC@email.com -- persontoemail@email.com
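Since the plan is to turn the script above into a cron job, a crontab entry along these lines would run it every weekday morning. The path, schedule, and dc arguments here are placeholders, not the real values. One caveat: `ldapsearch -W` prompts interactively for a password, so a cron version would need non-interactive authentication, for example `-y` with a password file.

```shell
# Hypothetical crontab entry (install via `crontab -e`); runs at 7:00, Mon-Fri.
# The three trailing arguments become the dc= components ($1 $2 $3) in the script.
0 7 * * 1-5 /path/to/newhires.sh example com org >> /var/log/newhires.log 2>&1
```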

Resources used:
VBScript Library: https://msdn.microsoft.com/en-us/library/aa227499(v=vs.60).aspx
RegEx parsing: http://www.slipstick.com/developer/regex-parse-message-text/
LDAP Man page: [Linux_terminal] > man ldap
LDIFtoCSV conversion tool: https://github.com/tachang/ldiftocsv
Stack Overflow — my question: http://stackoverflow.com/questions/36752876/why-does-copying-a-string-from-outlook-to-excel-open-a-new-instance-of-excel-for/36754211#36754211

Where I continue to admire the Problem without a complete Solution.

Hi, it’s me again. It’s been a while, because I’ve been “stuck” on what I want to populate this space with. For now, I’ve decided to use it as a thinking-out-loud space for problems I am working on at work, because writing it out helps.

A problem I run into on a daily basis is knowing what I want to accomplish, and the steps to accomplish it, but not knowing how to piece it together. This is where I think fundamental training in computer science (if you have it) would help. I have all the pieces, but I don’t have the glue. Yet. I’m learning what I need to do to solve this, but there’s little structure to it, because it’s all in the moment. See also: why formal education can sometimes be helpful.

Current problem I am trying to solve: we send out new hire emails, and currently that’s done manually. We receive an email notice with bits of information on a new employee/intern/etc. (but not their email address, because it may or may not have been created at the time the notice was sent). We retain X, Y, and Z of that information, put it in an Excel spreadsheet, manually look the person up in our internal directory for their email (if it exists), drop that into the sheet, and then do a mail merge on an email template (with a sizable number of links).

This is a LOAD of work, a large backlog exists, and doing it manually would be 100% inefficient, expensive, and a waste of time.

Current solution I have: run an ldapsearch query for Exchange accounts created on or after a certain date that are not test/dummy accounts, and print the X, Y, and Z variables for all of those accounts. Then convert that data from LDIF to .csv and save it to a file on the server. From there I can drop the file into the local shared drive (OR send an email with the file attached), where the person who does the mail merge can take the csv file and run the merge. The goal is to automate the mail merge in the sense that, once the file is created, a job checks its “modified date” and, when that changes, automatically sends the email; then a VBScript can be run to check for certain emails (this is where having the file on the local drive might be better than an email attachment), or to check the local shared drive folder for the file, and run the mail merge on it to send the emails.
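The “check the modified date, then send” step above can be sketched with bash’s `-nt` (newer-than) file test against a stamp file recording the last run. Everything here is a placeholder sketch, with `echo` standing in for the real hand-off (the mutt attachment or the mail merge):

```shell
#!/bin/bash
# Sketch: run the hand-off only when the watched file is newer than the last run.
# check_and_send prints "changed" (and updates the stamp) or "unchanged".
check_and_send() {
  local file="$1" stamp="$2"
  if [ "$file" -nt "$stamp" ]; then
    echo "changed"      # real action (mutt attachment / mail merge) goes here
    touch "$stamp"      # remember this run so we don't resend next time
  else
    echo "unchanged"
  fi
}
```

Dropped into a cron job, this makes resends idempotent: nothing goes out unless the csv has actually been regenerated since the last run.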

New problem: I have *no* idea how to glue all of this together so it can be executed from a single command. I have the ldapsearch query, I know how to print the output, I have the Perl script to convert the data from LDIF to csv, and I know how to email the output file (as an attachment). I don’t have the script all the way together for the mail merge yet, because I’d like to focus on solving the first problem of getting the data; you can forget about the mail merge if you don’t have the data.

Solution? I don’t have one yet.

**UPDATE**
To say I have no idea how to glue all of this together is not completely accurate. I know I want to write a bash script, because I can run all of these pieces from the command line; that was purposeful. I know that I will want to use the python-LDAP API. I know that I can (will?) use Perl for the data format conversion. I know that I want to automate the emailing of the output file as a cron job. What I don’t know (yet) is the syntax for gluing these together so that they run seamlessly with minimal effort on my part (in the end).
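The glue itself is usually just a bash script where each stage writes a file the next stage reads, and `set -e` aborts the run the moment any stage fails. A minimal skeleton of that pattern, with `echo` lines standing in for the real ldapsearch/conversion/mutt commands (which are assumptions here, not the actual commands):

```shell
#!/bin/bash
# Skeleton of the glue script: query -> convert -> email, stopping on failure.
set -euo pipefail   # abort on the first failing stage or unset variable

run_pipeline() {
  local log="$1"
  echo "query"   >> "$log"   # stand-in for: ldapsearch ... > raw.ldif
  echo "convert" >> "$log"   # stand-in for: perl ldif2csv.pl raw.ldif > out.csv
  echo "email"   >> "$log"   # stand-in for: mutt -a out.csv -s "..." -- someone@email.com
}
```

Because each stage only runs if the one before it succeeded, a failed LDAP query never produces an empty csv or an empty email.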

Resources:
Python-LDAP Applications (using the python-LDAP API) [part 1] [part 2] [part 3] [part 4]