The readings for Unit 12 represent the first experience I've knowingly had with project management. I say "knowingly" because I held corporate jobs for over 10 years before applying to SIRLS, and during that time lived through some technology changes at work. So, obviously, I was on the user end of some project management decisions without really being aware of the underlying process. Now that I see how time-consuming and involved those projects are, I have a better appreciation for the work required to bring new projects to fruition. Looking back, I remember that most of the time employees were irritated when new technologies were rolled out. Employees were used to the existing process, didn't always feel like learning a new system, and were sometimes slow to see the benefits of the new technology. After all that work, I can just imagine the IT people thinking, "What a bunch of ungrateful bastards!"
The reading that I valued the most this week was Cervone's "How not to run a digital library project". Because this subject is new to me, I found his list of "rules" to be a good start for how to view, and approach, project management. Rule #1 struck me almost immediately. Cervone's comment that, "Even when requirements are gathered, they are ignored in the belief that the project team does, in fact, know better what the end user wants," reminded me of a couple instances at work when new systems failed to efficiently do what employees required. For example, I remember a new software program that hid a frequently used function in a location that was cumbersome to access (3 or 4 clicks instead of 2). This happened because the project team was unaware how we, the employees, used that function.
I also like the WAG example in Rule #4. I don't remember seeing this acronym before, but I appreciate its meaning and understand the pitfalls of using a WAG for planning purposes. On the surface, the difference between "effort" and "duration" appears subtle, but it's an important distinction when estimating the time, and cost, of a project.
Another favorite was Rule #8. I certainly see how "scope creep" can quickly undermine a project, leading to cost overruns, time delays, loss of focus, and confusion regarding end user needs. I imagine that the more ambitious the project (and the more people involved), the greater the risk of "scope creep". Cervone's advice to "be flexible, but firm" is well taken. Obviously, flexibility is necessary because some changes are inevitable with big projects, but stray too far, and the final product may please no one. Therefore, if major changes are required, returning to the drawing board to draft a new plan may be warranted.
Monday, August 2, 2010
Wednesday, July 28, 2010
The End Complete
I decided to add DigIn to my SIRLS MA because I know digital collections will play an increasingly important role in the future of libraries and archives. Prior to this course, my only real exposure to the technical side of computing came during IRLS 571 in fall semester 2009. Success in that class encouraged me to pursue further studies, which brought me to 672 this summer. I entered both classes apprehensive that I might not be able to keep up, and afraid that my classmates would be starting well ahead of me in their experience and understanding. Fortunately, my fears were largely unfounded.
I liken my experience so far to learning a new language (which is essentially what we’re doing). Eleven weeks ago I understood basic computer functions – hardware, software, networks, etc. Today that understanding has been reinforced by an additional layer; namely, the LAMP stack. Except for phpMyAdmin, I had actually heard of the other three components before this summer, but had never used them. Before 672 I had no real understanding of how digital collections were designed or implemented. I knew databases formed a critical component, but couldn’t articulate much beyond that. Today I have an elementary appreciation for how they work and the underlying architecture. Obviously, I am far from prepared to actually apply this limited knowledge to a real-world project, but I know enough to feel attuned to the language and characteristics of digital collections. I think this stuff is new enough to me that I haven’t had any major changes in perspective yet, probably because my initial perspective was so undeveloped. But, if I’ve gained a greater appreciation for one aspect, it’s database design. I’ve toyed with Access a little in the past, but it wasn’t until our units on databases that I really began to realize how complicated database design really is.
This was my first DigIn class so, of course, there’s a long way to go. And I’m not going to lie, I still feel a little apprehensive about 675 this fall. I often worry that one week I won’t get the material, and I’ll fall behind and never catch up. But, at the same time, I’m excited to continue. There’s a certain satisfaction in being able to make a computer do what you want, especially when the results are displayed in a browser. Somehow, browser displays seem more tangible. So I’m going to call this class a success, hope I don’t forget what I learned over a couple weeks break, and pick up where we left off in 675.
Sunday, July 25, 2010
Databases, week two
Let me first say, I like SQL. I've struggled to learn certain aspects of it, but so far, I'm enjoying learning it, and creating databases is probably the most fun thing we've done in class. That said, there is still plenty I need to work on.
Setting up tables and attributes using Webmin and phpMyAdmin was easy thanks to the GUI. I remain an advocate for GUIs whenever possible, and this unit served to reinforce my prejudice. The bulk of my comments, therefore, will be reserved for the MySQL monitor.
As usual, the UACBT tutorials were quite helpful and easy to follow, and I found the print version from W3Schools to be useful for the dropbox assignment. Following along with the tutorials is not hard; likewise, the assignment instructions were easy to follow. The difficulty lies in translating the lessons into self-created syntax. As I started to work on the dropbox assignment, I found it difficult to construct the proper syntax from memory despite having seen it only moments before. This is, of course, like everything in life. It's easy to read a good book; much harder to write one. Fortunately, it did become easier as I proceeded through the assignment, a trend I expect will continue with practice.
Multiple table queries, in particular, remain taxing to construct correctly. And I still have only a tenuous grasp of inner, left, and right join. I understand roughly what they do, but it's still a bit fuzzy. Nonetheless, this part of the course seems very translatable to what I might do in the future, and I look forward to the possibility of creating databases for real-world applications.
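The inner/left join distinction can be pinned down with a tiny self-contained sketch. This uses Python's built-in sqlite3 module rather than the course's MySQL monitor, and the photographers/photos tables are invented for illustration, not taken from the assignment:

```python
# Sketch of INNER JOIN vs LEFT JOIN using Python's built-in sqlite3.
# Table and column names here are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE photographers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE photos (id INTEGER PRIMARY KEY, title TEXT,
                     photographer_id INTEGER REFERENCES photographers(id));
INSERT INTO photographers VALUES (1, 'Ana'), (2, 'Ben');
INSERT INTO photos VALUES (1, 'Canyon', 1);
""")

# INNER JOIN: only rows with a match on both sides.
cur.execute("""SELECT p.name, ph.title
               FROM photographers p
               INNER JOIN photos ph ON ph.photographer_id = p.id""")
print(cur.fetchall())  # [('Ana', 'Canyon')]

# LEFT JOIN: every photographer, with NULL where no photo matches.
cur.execute("""SELECT p.name, ph.title
               FROM photographers p
               LEFT JOIN photos ph ON ph.photographer_id = p.id""")
print(cur.fetchall())  # [('Ana', 'Canyon'), ('Ben', None)]
```

The rough rule: the join condition is the same in both queries; the join type decides what happens to rows that fail it.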
Friday, July 16, 2010
Databases, week one
I actually found this week's topic enjoyable, even though I'm far from understanding everything. I believe I can decipher a basic ERD, although drawing one from scratch remains a bit more difficult. The one I designed for the dropbox assignment is very simple, yet I'm still unsure I included every possible relationship. Bridge tables and the "O" relationship will take some additional work to fully understand. For instance, I think all of the tables in my ERD can stand alone (a country, attraction, and photographer can exist independently of each other), yet I hesitated to include any "O's" in my diagram. Not sure why...
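For the bridge-table part of this, a minimal sketch may help: a bridge (junction) table is just a table whose composite key pairs up the two sides of a many-to-many relationship. The names below are invented for illustration and are not the actual dropbox ERD, and I'm using Python's sqlite3 rather than MySQL:

```python
# Hedged sketch of a bridge (junction) table resolving a many-to-many
# relationship between attractions and photographers. Names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE attraction   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE photographer (id INTEGER PRIMARY KEY, name TEXT);
-- Bridge table: each row links one attraction to one photographer.
CREATE TABLE photo_shoot (
    attraction_id   INTEGER REFERENCES attraction(id),
    photographer_id INTEGER REFERENCES photographer(id),
    PRIMARY KEY (attraction_id, photographer_id)
);
INSERT INTO attraction VALUES (1, 'Grand Canyon'), (2, 'Saguaro NP');
INSERT INTO photographer VALUES (1, 'Ana'), (2, 'Ben');
INSERT INTO photo_shoot VALUES (1, 1), (1, 2), (2, 1);
""")

# Which photographers shot attraction 1?
cur.execute("""SELECT ph.name FROM photographer ph
               JOIN photo_shoot ps ON ps.photographer_id = ph.id
               WHERE ps.attraction_id = 1 ORDER BY ph.name""")
print([row[0] for row in cur.fetchall()])  # ['Ana', 'Ben']
```

The two base tables still stand alone; only the bridge rows depend on both of them, which is exactly the many-to-many case.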
Level one normalization is pretty easy; level three I'm still not clear on. In fact, levels two and three look almost the same to me. Both normalization and ERD are manageable with simple databases of 3-4 tables, but one can easily imagine how difficult relationship diagrams and normalization will be for more complicated databases.
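One way I've seen the second/third-level confusion untangled: third normal form is about transitive dependencies, where a non-key column depends on another non-key column rather than on the key. A sketch under invented column names (again in sqlite3, not MySQL):

```python
# Sketch of a third-normal-form fix. In the flat table, 'continent'
# depends on 'country' (a non-key column), not on the attraction key --
# a transitive dependency. 3NF moves country facts to their own table.
# All names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
-- Before 3NF: continent is determined by country, not by id.
CREATE TABLE attraction_flat (
    id INTEGER PRIMARY KEY, name TEXT, country TEXT, continent TEXT);

-- After 3NF: the dependent facts live in their own table.
CREATE TABLE country (name TEXT PRIMARY KEY, continent TEXT);
CREATE TABLE attraction (
    id INTEGER PRIMARY KEY, name TEXT,
    country_name TEXT REFERENCES country(name));

INSERT INTO country VALUES ('USA', 'North America');
INSERT INTO attraction VALUES (1, 'Grand Canyon', 'USA');
""")

# The join reassembles the original flat view without duplication.
cur.execute("""SELECT a.name, c.continent FROM attraction a
               JOIN country c ON c.name = a.country_name""")
print(cur.fetchall())  # [('Grand Canyon', 'North America')]
```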
Same thing with SQL... basics are easy, but to do any "heavy lifting" is going to require time, patience, and practice. The tutorials (especially the UACBT videos) are great, but I'm a long way from sitting down and creating a mature database from scratch. There's nothing surprising here - the basics are easy for most things in life, while achieving demonstrable proficiency requires greater dedication. Fortunately, I find the idea of creating databases for digital collections intriguing, and might pursue something of this nature for my Capstone project.
Sunday, July 11, 2010
Technology plans: the simpler, the better.
I wish to comment on three articles from this week concerning technology plans; namely, the Whittaker, Chabrow, and Schuyler articles. I found each interesting for different reasons, and will briefly relate what I took away from each.
First, the Whittaker article. I don't dispute that many technology implementations fail to materialize, but I found the research methodology in this article suspect. Obviously, wasted time and resources on poorly planned technology initiatives are a major issue; however, this article left me unconvinced that it's taken very seriously by many institutions, particularly large ones. For example, only 14% of the research surveys were returned? That is not a very inspiring number. Does that mean 86% of the recipients don't think it's a serious issue or problem, and so didn't take the time to participate? Also, 1450 surveys were sent, but only 176 "arrived in time to be analyzed for this report". So, really, only 12% of the surveys sent were used. Apparently, of these, 61% reported a failed IT initiative, but I wonder if that failure provided motivation to participate in the survey. As anyone with customer service experience knows, customers are much more likely to share a bad experience than a good one.
Also, the survey was sent to "chief executives" but many of the respondent comments blame upper management for IT failures. Would chief executives really blame themselves for the failures? I doubt it. I think the surveys were passed to other (unidentified) parties for completion. Bottom line - I didn't find this article very persuasive in convincing me that most institutions are distressed by the success rate of their IT projects.
Even though the Chabrow article focused on government IT plans, I took interest in several points made in the article. First, the idea that it was preferable to "fail fast" on IT projects that appear to be off-track. Recognition that a plan isn't working, and taking steps to quickly change or abandon it, will save time and money and, I believe, is good advice. I believe there should be no sacred-cows with IT projects. Don't throw good money, or good time, after bad. Admit it's not working, re-evaluate the need and plan, and change or dump it.
Chabrow cites frequent changes in management as one issue that can lead to failed IT projects. Managers inheriting a project they weren't originally involved in planning is a recipe for setbacks and failure. The risks are obvious: lack of interest, wanting to make changes, different priorities, etc. Continuity of management and staff is critical for success, particularly for long-term projects.
One thing that can help raise the success rate of projects is to implement them with "incremental steps and rollouts that deliver benefits along the way". I think this is great advice, especially for large projects. Small rollouts are less likely to meet with problems or resistance from staff, and allow for small successes over shorter time periods than waiting years for some big project to reach completion.
Finally, I really enjoyed the Schuyler article. His assertion that technology plans are a "political document" is true. They are often implemented by upper management because they're necessary for grant applications, but often are created without input from the IT department. Often the authors of technology plans aren't the same people that actually implement them. And his advice that technology plans are best kept vague rings true. Many technology plans look years ahead, but things happen. Recessions happen. Technology changes. Needs change. Overly specific technology plans are exactly the ones most likely to fail.
Eventually I hope to have the opportunity to contribute to a technology plan. At least, I'll have to read them, because my future career will probably involve some grant writing. In a sense, technology plans can seem like a necessary evil - a loop one has to go through to obtain funding for projects. As I mentioned above, I really think keeping them flexible and vague is good advice. As quickly as technology and business needs change, I think the most useful technology plan is a flexible one.
Sunday, July 4, 2010
XML - same as HTML, only different
I proceeded to familiarize myself with XML this week using the tools recommended in the assignment - namely, the Wikipedia articles, w3schools.com tutorials, and the Mark Long Introduction to XML videos. Last week's lesson on HTML certainly made XML easier because the languages are similar. Each of the tools mentioned above were helpful. In fact, at this early stage, I imagine any tool would be useful to a novice such as myself.
As mentioned before in this course, the Wikipedia articles, while generally current and thoughtful, often incorporate more detail than I'm prepared to appreciate at this point. Therefore, I often read the first third or half of each article to understand the basics, and usually find myself glazing over by the end. On the bright side, I do understand more of each article than I would have in early May, so I'm definitely learning (albeit slowly).
The w3schools and Mark Long videos were quite helpful. The w3 lessons work because you read them at your own pace, and are easy to go back to. Plus, they're typically concise and to-the-point. The w3 lessons provided a good foundation into the Long videos, which are a bit more detailed. I have not yet viewed the DTD and Schemas sections of the videos, however, but plan to return to these later. Part of my hesitation is simply that I don't understand the difference between these, although I'm sure the videos will help clarify my cloudiness.
The actual XML document was not hard to write. I did run into a minor problem trying to code a URL into an element, but that was resolved by writing "&amp;" in place of a bare ampersand. I am curious how to code an actual URL link into XML, like "a href" for HTML. I haven't yet found how to do this, although once I delve deeper into the tutorials my question will likely be answered.
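That ampersand fix can actually be checked with Python's standard library: a raw "&" breaks parsing, while the escaped entity parses cleanly and round-trips back to the original character. The example URL is invented:

```python
# Demonstrates why a bare '&' breaks XML and how escaping fixes it.
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

url = "http://example.com/?a=1&b=2"  # hypothetical URL

# Raw '&' is not well-formed XML; the parser rejects it.
try:
    ET.fromstring("<url>" + url + "</url>")
except ET.ParseError as err:
    print("raw & fails:", err)

# escape() turns '&' into '&amp;', which parses fine,
# and the parser converts it back to '&' when reading.
elem = ET.fromstring("<url>" + escape(url) + "</url>")
print(elem.text)  # http://example.com/?a=1&b=2
```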
Sunday, June 27, 2010
HTML's not so bad...
My first real experience with HTML came during 504 last summer. I am certainly still a novice, however, to this point find HTML fairly easy to work with and not intimidating. Of course, I'm basing this on the very simple websites we've been required to produce to this point, so my rosy assessment could quickly change if future assignments involve complicated HTML coding. But, so far at least, I'm having some fun with it.
This week I focused on reviewing the Powerpoint from 504, and followed it in creating my unit 6 web page (which looks quite similar to my 504 page). So far in SIRLS, I've been required to produce a web page about every 6 months. This is often enough to remember some basics, but too infrequent to become comfortable with the process - particularly in posting them to the U-System account. I'm sure DigIn will afford many future opportunities to create websites, so the process will undoubtedly become more familiar. Right now, for some reason, I always lack confidence that the transfer of files to the U-System account will go smoothly, and fear the page will be missing elements that are present when viewing the document during creation. Images, especially, I'm afraid won't be transferred properly and I'll be left with a page full of the dreaded "X" symbol.
I also viewed lessons from the w3schools.com website. These are helpful, easy to understand, valuable for reinforcing what I already know, and present new concepts in a manner that is accessible to the layperson. So far I'm sticking to the basic lessons, but plan to revisit the more advanced ones as we proceed through the course. A couple little things surprised me. For instance, future versions of HTML won't allow you to skip certain end tags that can be missing now (although it's not recommended). This only surprised me because I imagined rules might become more flexible as the code evolved, not less. Also, I'm still not clear on the differences between HTML, XHTML, and XML. From what I've seen, the code for each looks fairly similar. I believe XML prioritizes data content over style, although this is certainly an oversimplification.