I decided to add DigIn to my SIRLS MA because I know digital collections will play an increasingly important role in the future of libraries and archives. Prior to this course, my only real exposure to the technical side of computing came during IRLS 571 in fall semester 2009. Success in that class encouraged me to pursue further studies, which brought me to 672 this summer. I entered both classes apprehensive that I might not be able to keep up, and afraid that my classmates would be starting well ahead of me in their experience and understanding. Fortunately, my fears were largely unfounded.
I liken my experience so far to learning a new language (which is essentially what we’re doing). Eleven weeks ago I understood basic computer functions – hardware, software, networks, etc. Today that understanding has been reinforced by an additional layer; namely, the LAMP stack. Aside from phpMyAdmin, I had at least heard of the stack’s components – Linux, Apache, MySQL, and PHP – before this summer, but had never used any of them. Before 672 I had no real understanding of how digital collections were designed or implemented. I knew databases formed a critical component, but couldn’t articulate much beyond that. Today I have an elementary appreciation for how they work and for the underlying architecture. Obviously, I am far from prepared to apply this limited knowledge to a real-world project, but I know enough to feel attuned to the language and characteristics of digital collections. This material is new enough to me that I haven’t had any major changes in perspective yet, probably because my initial perspective was so undeveloped. But if I’ve gained a greater appreciation for one aspect, it’s database design. I’ve toyed with Access a little in the past, but it wasn’t until our units on databases that I began to realize how complicated database design really is.
This was my first DigIn class so, of course, there's a long way to go. And I'm not going to lie, I still feel a little apprehensive about 675 this fall. I often worry that one week I won't get the material, and I'll fall behind and never catch up. But, at the same time, I'm excited to continue. There's a certain satisfaction in being able to make a computer do what you want, especially when the results are displayed in a browser. Somehow, browser displays seem more tangible. So I'm going to call this class a success, hope I don't forget what I learned over a couple weeks' break, and pick up where we left off in 675.
Wednesday, July 28, 2010
Sunday, July 25, 2010
Databases, week two
Let me first say, I like SQL. I've struggled to learn certain aspects of it, but so far I'm enjoying it, and creating databases is probably the most fun thing we've done in class. That said, there is still plenty I need to work on.
Setting up tables and attributes using Webmin and phpMyAdmin was easy thanks to the GUI. I remain an advocate for GUIs whenever possible, and this unit served to reinforce my prejudice. The bulk of my comments, therefore, will be reserved for the MySQL monitor.
As usual, the UACBT tutorials were quite helpful and easy to follow, and I found the print version from W3Schools to be useful for the dropbox assignment. Following along with the tutorials is not hard; likewise, the assignment instructions were easy to follow. The difficulty lies in translating the lessons into self-created syntax. As I started to work on the dropbox assignment, I found it difficult to construct the proper syntax from memory despite having seen it only moments before. This is, of course, like everything in life. It's easy to read a good book; much harder to write one. Fortunately, it did become easier as I proceeded through the assignment, a trend I expect will continue with practice.
Multiple-table queries, in particular, remain taxing to construct correctly, and I still have only a tenuous grasp of inner, left, and right joins. I understand roughly what they do, but it's still a bit fuzzy. Nonetheless, this part of the course seems very translatable to what I might do in the future, and I look forward to the possibility of creating databases for real-world applications.
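To help the join types sink in, I sketched a tiny example for myself using Python's built-in sqlite3 module (the tables and data here are invented purely for illustration):

```python
import sqlite3

# Two toy tables: each photo points at a photographer,
# but one photographer (Dana) has no photos yet.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE photographer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE photo (id INTEGER PRIMARY KEY, title TEXT, photographer_id INTEGER);
    INSERT INTO photographer VALUES (1, 'Alice'), (2, 'Dana');
    INSERT INTO photo VALUES (1, 'Grand Canyon', 1);
""")

# INNER JOIN keeps only rows that match on BOTH sides.
inner = con.execute("""
    SELECT p.name, ph.title
    FROM photographer p
    INNER JOIN photo ph ON ph.photographer_id = p.id
    ORDER BY p.id
""").fetchall()
print(inner)  # [('Alice', 'Grand Canyon')]

# LEFT JOIN keeps EVERY row from the left table,
# filling in NULL (None) where the right side has no match.
left = con.execute("""
    SELECT p.name, ph.title
    FROM photographer p
    LEFT JOIN photo ph ON ph.photographer_id = p.id
    ORDER BY p.id
""").fetchall()
print(left)  # [('Alice', 'Grand Canyon'), ('Dana', None)]
```

A right join is just a left join with the two tables swapped, which is why older versions of SQLite don't even bother supporting RIGHT JOIN directly.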
Friday, July 16, 2010
Databases, week one
I actually found this week's topic enjoyable, even though I'm far from understanding everything. I believe I can decipher a basic ERD, although drawing one from scratch remains a bit more difficult. The one I designed for the dropbox assignment is very simple, yet I'm still unsure I included every possible relationship. Bridge tables and the "O" relationship will take some additional work to fully understand. For instance, I think all of the tables in my ERD can stand alone (a country, attraction, and photographer can exist independently of each other), yet I hesitated to include any "O's" in my diagram. Not sure why...
Level one normalization (first normal form) is pretty easy; level three I'm still not clear on. In fact, levels two and three look almost the same to me. Both normalization and ERDs are manageable with simple databases of 3-4 tables, but one can easily imagine how difficult relationship diagrams and normalization become for more complicated databases.
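One distinction that helped levels two and three click a little for me: second normal form is about attributes that depend on only part of a composite key, while third normal form is about non-key attributes that depend on other non-key attributes. Here's a minimal sketch of a 3NF fix, using Python's built-in sqlite3 module and tables invented from my country/attraction ERD:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# This design violates 3NF: country_name depends on country_id
# (a non-key attribute), so it would repeat for every attraction
# in the same country -- a transitive dependency.
#
#   CREATE TABLE attraction (
#       attraction_id INTEGER PRIMARY KEY,
#       name          TEXT,
#       country_id    INTEGER,
#       country_name  TEXT    -- transitive dependency
#   );

# The 3NF fix: move the transitively dependent attribute into its
# own table, and keep only the foreign key in attraction.
con.executescript("""
    CREATE TABLE country (
        country_id   INTEGER PRIMARY KEY,
        country_name TEXT
    );
    CREATE TABLE attraction (
        attraction_id INTEGER PRIMARY KEY,
        name          TEXT,
        country_id    INTEGER REFERENCES country(country_id)
    );
""")

tables = {row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
print(sorted(tables))  # ['attraction', 'country']
```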
Same thing with SQL... basics are easy, but to do any "heavy lifting" is going to require time, patience, and practice. The tutorials (especially the UACBT videos) are great, but I'm a long way from sitting down and creating a mature database from scratch. There's nothing surprising here - the basics are easy for most things in life, while achieving demonstrable proficiency requires greater dedication. Fortunately, I find the idea of creating databases for digital collections intriguing, and might pursue something of this nature for my Capstone project.
Sunday, July 11, 2010
Technology plans: the simpler, the better.
I wish to comment on three articles from this week concerning technology plans; namely, the Whittaker, Chabrow, and Schuyler articles. I found each interesting for different reasons, and will briefly relate what I took away from each.
First, the Whittaker article. I don't dispute that many technology implementations fail to materialize, but I found the research methodology in this article suspect. Obviously, time and resources wasted on poorly planned technology initiatives are a major issue; however, this article left me unconvinced that the problem is taken very seriously by many institutions, particularly large ones. For example, only 14% of the research surveys were returned? That is not a very inspiring number. Does that mean 86% of the recipients don't consider it a serious issue, and so didn't take the time to participate? Also, 1,450 surveys were sent, but only 176 "arrived in time to be analyzed for this report". So, really, only 12% of the surveys sent were used. Apparently, of these, 61% reported a failed IT initiative, but I wonder if that failure provided the motivation to participate in the survey. As anyone with customer service experience knows, customers are much more likely to share a bad experience than a good one.
Also, the survey was sent to "chief executives", but many of the respondent comments blame upper management for IT failures. Would chief executives really blame themselves for the failures? I doubt it. I suspect the surveys were passed to other (unidentified) parties for completion. Bottom line - this article didn't convince me that most institutions are distressed by the success rate of their IT projects.
Even though the Chabrow article focused on government IT plans, several of its points interested me. First, the idea that it is preferable to "fail fast" on IT projects that appear to be off track. Recognizing that a plan isn't working, and taking steps to quickly change or abandon it, saves time and money - good advice, I believe. There should be no sacred cows with IT projects. Don't throw good money, or good time, after bad. Admit it's not working, re-evaluate the need and the plan, and change or dump it.
Chabrow cites frequent changes in management as one issue that can lead to failed IT projects. Managers inheriting a project they weren't originally involved in planning is a recipe for setbacks and failure. The risks are obvious: lack of interest, wanting to make changes, different priorities, etc. Continuity of management and staff is critical for success, particularly for long-term projects.
One thing that can help raise the success rate of projects is to implement them with "incremental steps and rollouts that deliver benefits along the way". I think this is great advice, especially for large projects. Small rollouts are less likely to meet with problems or resistance from staff, and allow for small successes over shorter time periods than waiting years for some big project to reach completion.
Finally, I really enjoyed the Schuyler article. His assertion that a technology plan is a "political document" is true. Plans are often commissioned by upper management because they're necessary for grant applications, yet created without input from the IT department; the authors of technology plans frequently aren't the people who actually implement them. His advice that technology plans are best kept vague also rings true. Many technology plans look years ahead, but things happen. Recessions happen. Technology changes. Needs change. Overly specific technology plans are exactly the ones most likely to fail.
Eventually I hope to have the opportunity to contribute to a technology plan. At the least, I'll have to read them, because my future career will probably involve some grant writing. In a sense, technology plans can seem like a necessary evil - a hoop one has to jump through to obtain funding for projects. As I mentioned above, I really think keeping them flexible and vague is good advice. As quickly as technology and business needs change, the most useful technology plan is a flexible one.
Sunday, July 4, 2010
XML - same as HTML, only different
I proceeded to familiarize myself with XML this week using the tools recommended in the assignment - namely, the Wikipedia articles, the w3schools.com tutorials, and the Mark Long Introduction to XML videos. Last week's lesson on HTML certainly made XML easier because the languages are similar. Each of the tools mentioned above was helpful. In fact, at this early stage, I imagine any tool would be useful to a novice such as myself.
As mentioned before in this course, the Wikipedia articles, while generally current and thoughtful, often incorporate more detail than I'm prepared to appreciate at this point. Therefore, I often read the first third or half of each article to understand the basics, and usually find myself glazing over by the end. On the bright side, I do understand more of each article than I would have in early May, so I'm definitely learning (albeit slowly).
The w3schools and Mark Long videos were quite helpful. The w3 lessons work because you read them at your own pace, and they're easy to go back to. Plus, they're typically concise and to the point. The w3 lessons provided a good foundation for the Long videos, which are a bit more detailed. I have not yet viewed the DTD and Schemas sections of the videos, but plan to return to them later. Part of my hesitation is simply that I don't understand the difference between the two, although I'm sure the videos will help clear that up.
The actual XML document was not hard to write. I did run into a minor problem trying to code a URL into an element, but that was resolved by using '&amp;' in place of a raw &. I am curious how to code an actual clickable link into XML, like "a href" in HTML. I haven't yet found how to do this, although once I delve deeper into the tutorials my question will likely be answered.
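For what it's worth, Python's standard library handles that escaping automatically, which makes the rule easy to see in action. A quick sketch (the element name and URL are invented):

```python
import xml.etree.ElementTree as ET

# An element whose text contains a raw ampersand.
elem = ET.Element("url")
elem.text = "http://example.com/photos?country=us&page=2"

# Serializing escapes the & to &amp; automatically...
xml_bytes = ET.tostring(elem)
print(xml_bytes)
# b'<url>http://example.com/photos?country=us&amp;page=2</url>'

# ...and parsing turns &amp; back into a plain &.
parsed = ET.fromstring(xml_bytes)
print(parsed.text)  # http://example.com/photos?country=us&page=2
```

As for links: from what I can tell, XML itself has no built-in equivalent of HTML's "a href". Linking is typically layered on top, for example through the W3C XLink specification, or simply left to whatever application consumes the XML.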