Friday, May 29, 2009
Sunday, May 24, 2009
Friday, May 22, 2009
* Revisions and Additions to the Core List of ICT Indicators. Partnership on Measuring ICT for Development, 2009;
* Manual for Measuring ICT Access and Use by Households and Individuals, International Telecommunication Union, 2009;
* Manual for the Production of Statistics on the Information Economy, revised edition, UNCTAD, 2009;
* Measuring ICT: the Global Status of ICT Indicators, Partnership on Measuring ICT for Development, 2005;
* Measuring the Information Society: The ICT Development Index, International Telecommunication Union, 2009;
* Report of the Partnership on Measuring Information and Communication Technologies for Development: Information and Communications Technology Statistics, Economic and Social Council, 2009.

The Seminar provided a platform for national experts, policymakers, practitioners and stakeholders to discuss ICT indicators and topics that are important to national policymaking. The following suggestions were made to improve the availability of ICT statistics in India:
* Harmonising and scaling up statistics available at ministries, national statistical offices and other agencies;
* Bridging the data gap between available statistics and those required by the Revised Core List of ICT Indicators;
* Adapting international statistical tools and guidelines related to gathering, analysing and presenting statistical data;
* Building capacity at the national level in order to maintain the quality and reliability of data.
T. V. Padma
[NEW DELHI] An Indian online forum dealing with intellectual property rights has launched a petition to Indian patent authorities, calling for more transparency in the country's patent system and for information to be made more easily accessible.

The online petition was launched this week (28 April). It follows an earlier petition submitted at the end of 2007, after which Indian patent authorities said that the complete database of searchable patent information, including patent specifications and decisions, would be available online by March 2009. That deadline was not met, prompting the new petition, Shamnad Basheer, a professor of intellectual property law at the National University of Juridical Sciences, Kolkata, who initiated the petition, told SciDev.Net.

The second petition calls for more patent-related information to be made public. This includes all correspondence between a patent applicant and the patent office; clear patent titles and abstracts; patent office circulars that affect patentability; corresponding patent applications filed elsewhere; and amendments made by the applicant from time to time to address issues raised by opponents challenging a patent.

A key piece of information sought by the petition relates to 'working' statements, which patentees are supposed to file with the Indian patent office, stating whether a firm that has been granted a patent for a drug is actually making the drug. Under Indian patent law, a firm that has been granted a patent for a drug in India must also make it in India within the next three years; otherwise the drug is eligible for compulsory licensing. This information is often withheld when filing an application, says Basheer.
"Since most of the drug patents in India are by MNCs [multinational corporations] and many of these patents are not 'worked' in India (the patented drug is not manufactured in India, but only imported into India), many of these patents become susceptible to compulsory licenses," he says.

The groups are also requesting that the Indian government build public-private partnerships with the ICT sector in India to create a better e-filing system and other innovative ICT tools to aid a more efficient administration of the Indian patent office.

Basheer says the petition attracted 100 signatories after the first day, including patent attorneys, pharmaceutical companies, students from Carnegie Mellon University and the Max Planck Institute, and global not-for-profit organisations such as the Initiative for Medicines, Access and Knowledge, which provides technical assistance on IPR issues to governments, campaigns against unsound pharmaceutical patents and promotes access to drugs.
Saturday, May 16, 2009
Fedora Commons and the DSpace Foundation, two of the largest providers of open source software for managing and providing access to digital content, have announced today that they will join their organizations to pursue a common mission. Jointly, they will provide leadership and innovation in open source technologies for global communities who manage, preserve, and provide access to digital content.
The joined organization, named "DuraSpace," will sustain and grow its flagship repository platforms - Fedora and DSpace. DuraSpace will also expand its portfolio by offering new technologies and services that respond to the dynamic environment of the Web and to new requirements from existing and future users. DuraSpace will focus on supporting existing communities and will also engage a larger and more diverse group of stakeholders in support of its not-for-profit mission. The organization will be led by an executive team consisting of Sandy Payette (Chief Executive Officer), Michele Kimpton (Chief Business Officer), and Brad McLean (Chief Technology Officer) and will operate out of offices in Ithaca, NY and Cambridge, MA.
"This is a great development," said Clifford Lynch, Executive Director of the Coalition for Networked Information (CNI). "It will focus resources and talent in a way that should really accelerate progress in areas critical to the research, education, and cultural memory communities. The new emphasis on distributed reliable storage infrastructure services and their integration with repositories is particularly timely."
Together Fedora and DSpace make up the largest market share of open repositories worldwide, serving over 700 institutions. These include organizations committed to the use of open source software solutions for the dissemination and preservation of academic, scientific, and cultural digital content.
"The joining of DSpace and Fedora Commons is a watershed event for libraries, specifically, and higher education, more generally," said James Hilton, CIO of the University of Virginia. "Separately, these two organizations operated with similar missions and a shared commitment to developing and supporting open technologies. By bringing together the technical, financial, and community-based resources of the two organizations, their communities gain a robust organization focused on solving the many challenges involved in storing, curating, and preserving digital data and scholarship," he said.
DuraSpace will continue to support its existing software platforms, DSpace and Fedora, as well as expand its offerings to support the needs of global information communities. The first new technology to emerge will be a Web-based service named "DuraCloud." DuraCloud is a hosted service that takes advantage of the cost efficiencies of cloud storage and cloud computing, while adding value to help ensure longevity and re-use of digital content. The DuraSpace organization is developing partnerships with commercial cloud providers who offer both storage and computing capabilities.
The DuraCloud service will be run by the DuraSpace organization. Its target audiences are organizations responsible for digital preservation and groups creating shared spaces for access and re-use of digital content. DuraCloud will be accessible directly as a Web service and also via plug-ins to digital repositories including Fedora and DSpace. The software developed to support the DuraCloud service will be made available as open source. An early release of DuraCloud will be available for selected pilot partners in Fall 2009.
Key Benefits of the DuraSpace Organization
DuraSpace will support both DSpace and Fedora by working closely with both communities and when possible, develop synergistic technologies, services, and programs that increase interoperability of the two platforms. DuraSpace will also support other open source software projects including the Mulgara semantic store, a scalable RDF database.
DuraSpace is mission-focused. The organization will be associated with its broader mission of working towards developing services and solutions on behalf of diverse communities rather than focusing on single-solution product development. This change in orientation can be characterized as moving beyond the software and toward the mission.
DuraSpace will bring strength and leadership to a larger community and amplify the value brought by each organization individually. With both organizations working in unison, there can be significant economies of scale, synergies in developing open technologies and services, and a strong position for long-term sustainability.
Learn More about DuraSpace
DuraSpace will be represented at the Fourth Annual International Conference on Open Repositories (http://openrepositories.org/). Please check the schedule and visit the Fedora Commons and DSpace information tables at the conference to learn more. Also, initial information will be available at the DuraSpace website, with more information forthcoming in June 2009.
About Fedora Commons
Fedora Commons (http://fedora-commons.org/) was established in 2007 as a not-for-profit organization and the home of the Fedora repository software and related open source projects. Fedora is a robust, integrated, repository system that enables storage, access and management for virtually any kind of digital content. The Flexible Extensible Digital Object Repository Architecture (Fedora) was originally designed by Sandy Payette and colleagues at Cornell University and was established as an open source project in 2001 by Cornell and the University of Virginia. Fedora has a large international user community and is installed worldwide at universities, libraries, research institutions, cultural organizations, and corporations.
About DSpace Foundation
The DSpace Foundation (http://dspace.org/) was formed in 2007 to support the growing global community of institutions using DSpace open source software to manage scholarly works in a digital repository. DSpace was jointly developed in 2002 by Hewlett-Packard and the MIT Libraries. Today, more than 500 organizations worldwide use the software to manage, preserve, and share their scholarly output.
May 12, 2009 was a historic moment for Google: at its Searchology event, the company launched a new search tool named Google Squared. It is a major effort in Web 3.0 and semantic search, an attempt to structure the unstructured data on web pages: it extracts data from the pages and presents the search results as squares in an online spreadsheet format. The San Francisco Chronicle described the feature in a bit more detail: it compiles details from several web pages and organizes them into a table on a single page, with multiple columns, like a spreadsheet. One of the features announced at the event is called Search Options, a collection of tools designed to let users better "slice and dice" their search results so they can manipulate the information they're getting. Google's Marissa Mayer said the tools should help people who struggle with exactly what query they should pose.
"Let's say you are looking for forum discussions about a specific product, but are most interested in ones that have taken place more recently," she wrote. "That's not an easy query to formulate, but with Search Options you can search for the product's name, apply the option to filter out anything but forum sites, and then apply an option to only see results from the past week."

One Search Options tool is geared toward giving users more information when they do a search. For instance, instead of just getting results in text form, they could have the search engine return images as well.

Google acknowledged that this is still very much a "labs" feature that is imperfect at best. However, between Wolfram Alpha, Google's efforts in semantic search, and a host of competitors that will be popping up in this field, we may very well be on the edge of Search 3.0. This is good news for our students, teachers, and library scientists struggling to help students get the information they want from the billions of pages of junk (and millions of pages of interest) floating around the web.

In the same vein, Google is also adding more information to its results snippets, those little pieces of text that tell you about the site that's been pulled up. If you're searching for a hotel, for example, the snippet won't just tell you the name of the hotel and where it is; now it could tell you its price range, the number of stars in customer reviews and the number of reviews listed.

Google Squared still needs a lot of improvement, which is why it is being released to Labs. It collects information by looking for structures that seem to imply facts, and the squares are built out from the facts with the highest probability. There will be concerns over Google grabbing data and serving it up on its own without sending searchers to the sites that provided the information.
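The square-building idea described above can be caricatured in a few lines. The sketch below is purely illustrative, not Google's actual algorithm: candidate (attribute, value) pairs extracted from different pages are merged by keeping the highest-probability value for each attribute, producing one row of a "square".

```python
# Toy sketch of building one row of a "square": merge candidate facts
# extracted from several web pages, keeping the most probable value
# for each attribute. (Illustrative only; not Google's algorithm.)

def build_square_row(candidates):
    """candidates: iterable of (attribute, value, probability) triples.
    Returns {attribute: value}, keeping the most probable value seen."""
    best = {}
    for attr, value, prob in candidates:
        if attr not in best or prob > best[attr][1]:
            best[attr] = (value, prob)
    return {attr: value for attr, (value, _) in best.items()}

# Hypothetical extractions for one entity, e.g. "golden retriever":
extracted = [
    ("origin", "Scotland", 0.9),
    ("origin", "England", 0.4),
    ("life span", "10-12 years", 0.8),
]
print(build_square_row(extracted))
# {'origin': 'Scotland', 'life span': '10-12 years'}
```

The conflicting "origin" candidates illustrate why results can be imperfect: the engine can only pick the structure that most probably implies a fact.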
For more reading, the following links may be helpful:
Screenshots of Google Squared
Tuesday, May 12, 2009
Sunday, May 10, 2009
Wolfram Alpha: a new search engine that could leave Google behind.
Mathematica has been a great success in very broadly handling all kinds of formal technical systems and knowledge.
But what about everything else? What about all other systematic knowledge? All the methods and models, and data, that exist?
Fifty years ago, when computers were young, people assumed that they’d quickly be able to handle all these kinds of things.
And that one would be able to ask a computer any factual question, and have it compute the answer.
But it didn’t work out that way. Computers have been able to do many remarkable and unexpected things. But not that.
I’d always thought, though, that eventually it should be possible. And a few years ago, I realized that I was finally in a position to try to do it.
I had two crucial ingredients: Mathematica and NKS. With Mathematica, I had a symbolic language to represent anything—as well as the algorithmic power to do any kind of computation. And with NKS, I had a paradigm for understanding how all sorts of complexity could arise from simple rules.
But what about all the actual knowledge that we as humans have accumulated?
A lot of it is now on the web—in billions of pages of text. And with search engines, we can very efficiently search for specific terms and phrases in that text.
But we can’t compute from that. And in effect, we can only answer questions that have been literally asked before. We can look things up, but we can’t figure anything new out.
So how can we deal with that? Well, some people have thought the way forward must be to somehow automatically understand the natural language that exists on the web. Perhaps getting the web semantically tagged to make that easier.
But armed with Mathematica and NKS I realized there’s another way: explicitly implement methods and models, as algorithms, and explicitly curate all data so that it is immediately computable.
It’s not easy to do this. Every different kind of method and model—and data—has its own special features and character. But with a mixture of Mathematica and NKS automation, and a lot of human experts, I’m happy to say that we’ve gotten a very long way.
But, OK. Let’s say we succeed in creating a system that knows a lot, and can figure a lot out. How can we interact with it?
The way humans normally communicate is through natural language. And when one’s dealing with the whole spectrum of knowledge, I think that’s the only realistic option for communicating with computers too.
Of course, getting computers to deal with natural language has turned out to be incredibly difficult. And for example we’re still very far away from having computers systematically understand large volumes of natural language text on the web.
But if one’s already made knowledge computable, one doesn’t need to do that kind of natural language understanding.
All one needs to be able to do is to take questions people ask in natural language, and represent them in a precise form that fits into the computations one can do.
Of course, even that has never been done in any generality. And it’s made more difficult by the fact that one doesn’t just want to handle a language like English: one also wants to be able to handle all the shorthand notations that people in every possible field use.
I wasn’t at all sure it was going to work. But I’m happy to say that with a mixture of many clever algorithms and heuristics, lots of linguistic discovery and linguistic curation, and what probably amount to some serious theoretical breakthroughs, we’re actually managing to make it work.
Pulling all of this together to create a true computational knowledge engine is a very difficult task.
It’s certainly the most complex project I’ve ever undertaken. Involving far more kinds of expertise—and more moving parts—than I’ve ever had to assemble before.
And—like Mathematica, or NKS—the project will never be finished.
But I’m happy to say that we’ve almost reached the point where we feel we can expose the first part of it.
It’s going to be a website: http://www.wolframalpha.com/. With one simple input field that gives access to a huge system, with trillions of pieces of curated data and millions of lines of algorithms.
We’re all working very hard right now to get WolframAlpha ready to go live.
I think it’s going to be pretty exciting. A new paradigm for using computers and the web.
That almost gets us to what people thought computers would be able to do 50 years ago!
Saturday, May 9, 2009
a. Seminar: A formal presentation by one or more experts in which the attendees are encouraged to discuss the subject matter.
According to Carol Pierce:
"Seminars tend to be more one-way from the presenter without opportunities for practice or application nor do they actively engage participants in the process."
A seminar is a bit more of a traditional training session, with the preponderance of time spent presenting material from the front of the room.
Seminars are usually 90 minutes to 3 hours.
Seminars are frequently more lecture driven with less participant interaction other than answering questions. Often the questions at a seminar are taken at the end of the presentation.
Seminars have more limited handouts, often just a printout of the PowerPoint presentation.
A seminar involves more individual thinking, working, writing and processing, perhaps with one or two people sitting close by, but it won't be quite as active.
A seminar is a meeting on a specific subject, or a meeting of university or college students for study or discussion with an academic supervisor.
A seminar can also mean a specialized educational class.
b. Conference: A meeting for the exchange of views on a given topic OR a prearranged meeting for consultation or exchange of information or discussion (especially one with a formal agenda)
Conference also refers to a meeting for lectures or discussion, whereas a seminar is a meeting on a specific subject, or a meeting of university or college students for study or discussion with an academic supervisor. Conference has no such specific meaning.
c. Symposium: A meeting or conference for discussion of a topic, especially one in which the participants form an audience and make presentations. Symposium originally referred to a drinking party (the Greek verb sympotein means "to drink together") but has since come to refer to any academic conference, or a style of university class characterized by an openly discursive rather than lecture and question-and-answer format.
d. Workshop: Refers to a seminar, discussion group, or the like that emphasizes the exchange of ideas and the demonstration and application of techniques, skills, etc. It emphasizes problem-solving and hands-on training, and requires the involvement of the participants.
An educational seminar or series of meetings emphasizing interaction and exchange of information among a usually small number of participants, in which a group of people is engaged in intensive study or work in a creative or practical field.
A workshop seems to imply relatively more time spent interactively, perhaps in facilitated activities, where the participants generate some form of product (e.g. goals for the coming year, a strategy for dealing with a customer, etc.) at the end of the session.
Workshops get participants fully involved in the learning process: small and large group discussions, activities & exercises, opportunities to practice applying the concepts that are presented.
Workshops are usually longer, often 1 to 2 days.
Workshops include far more interactive exercises.
At a seminar, questions are often taken at the end of the presentation; at a workshop, questions are handled as they arise and often turn into group discussions.
Workshops are usually smaller, 25 people or less. Seminars are often over 100 people.
Workshops usually have a workbook handout of 50-100 pages.
Workshop is more "hands-on" for the participants. They are going to be working, thinking, doing, processing, creating (maybe physically), up and at 'em moving around, lots of interaction, etc.
A workshop is quite different. It means a place where manual work is done, especially manufacturing or repairing. It also means a group working together, on a creative project, discussing a topic, or studying a subject.
A workshop is a period of time (probably 3-6 hours, up to 4 days) when the presenter provides "activity learning" opportunities balanced with time for reflection and for collaboration with peers. This is a time and place for teachers to experience for themselves (in the role of student) the learning activities that they will be utilizing to help their own students learn and grow. Two- to four-day workshops often have the option to be taken for college credit.
e. Training: Refers to the acquisition of knowledge, skills, and competencies as a result of the teaching of vocational or practical skills and knowledge that relates to specific useful skills.
f. Congress: A congress is very similar to a conference, though it implies a greater degree of formality. A congress is a formal meeting of delegates or representatives, e.g. the representatives of a group of nations, to discuss matters of interest or concern. It was also a formal way of saying "sexual intercourse", but that particular usage is rare these days.
Sunday, May 3, 2009
Edmonton, Alberta, Canada – May 2009 – Libramation, a leader in providing library automation equipment, technology and RFID to libraries, announced today the latest in library robotic self-check technology.
- Want your patrons to be able to borrow library materials 24/7?
- The ability to do this without extending library hours or hiring extra staff?
The Libramation LibraMate, developed by NBD/Biblion, is the answer! It allows patrons 24-hour access with a machine that looks and feels much like an ATM. Using a simple touch screen, patrons can browse through a list of the items available and make their selection. The library can configure the system to determine how many items a patron can borrow. The patron simply scans their patron card (standard barcode or RFID), and the item, in a protective case, is quickly ejected from the machine.
LibraMate also acts as a return station; returned items are cleared with the ILS and are immediately available for the next patron. In order to make optimal use of the system, the cases come in two sizes to accommodate various materials. The cases are bigger than current products on the market, allowing approximately 95% of your collection to be circulated via the LibraMate.
In response to a society that demands service 24 hours a day, LibraMate is a proven solution in the Netherlands. The equipment is state of the art, but easy to operate for both library staff and patrons. The LibraMate, which uses RFID technology, is especially suitable for small or new communities where library facilities are not yet feasible or are only available by bookmobile. LibraMate can be used to extend library services to shopping centers, schools, bus or subway/train stations and community centers, since the back-end can be customized to hold anywhere from 600 to 1000 items.
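The borrowing rule described above, a library-configured per-patron item limit with returns cleared immediately, amounts to simple bookkeeping. The sketch below is purely illustrative and is not Libramation's software; all names are hypothetical:

```python
# Illustrative self-check logic: the library configures how many items
# a patron may hold at once; a return frees up the allowance immediately.

class SelfCheckKiosk:
    def __init__(self, item_limit=5):
        self.item_limit = item_limit   # set by the library
        self.loans = {}                # patron_id -> number of items out

    def checkout(self, patron_id):
        """Dispense an item only if the patron is under the limit."""
        count = self.loans.get(patron_id, 0)
        if count >= self.item_limit:
            return False               # limit reached; nothing ejected
        self.loans[patron_id] = count + 1
        return True

    def checkin(self, patron_id):
        """A returned item is cleared and available again at once."""
        if self.loans.get(patron_id, 0) > 0:
            self.loans[patron_id] -= 1

kiosk = SelfCheckKiosk(item_limit=2)
print(kiosk.checkout("patron-1"))  # True
print(kiosk.checkout("patron-1"))  # True
print(kiosk.checkout("patron-1"))  # False
```

In the real machine this check would run against the library's ILS rather than an in-memory table, which is how returned items become instantly available to the next patron.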
Dimensions (L x W x H)*: 395 x 126 x 238 cm (155" x 50" x 94")
Capacity*: 600 items
Sound production: negligible
Power supply: 380 volt, 16 amp
* The dimensions and capacity of the LibraMate can be customized to your Library requirements.
About Libramation, a partner of the Lib~Chip Group of Companies
Libramation, celebrating its 10th anniversary, is committed to providing quality solutions for today's library. Libramation specializes in library automation technology, equipment and software. Since the development of its RFID system began in 2001, the Lib~Chip Group of Companies has completed more than 125 Lib~Chip RFID installations.
Our self-check products are installed in more than 450 libraries. Libraries can choose from numerous hardware and software options, custom animations to interact with patrons and a variety of languages. Libramation understands that each customer has unique requirements. Coupled with our expertise and knowledge, we utilize technologies specifically designed to automate library workflow processes. Libramation consults with the library and makes recommendations that help the library address their distinct needs.
Libramation products include patron self-check stations, ergonomic circulation desks, automated materials-handling systems, CD/DVD 24-hour self-charge and return "Media Bank" kiosks, RFID technology and the new LibraMate.
Copyright (c) 2009 Libramation
Summary: Libramation announced the latest in library robotic self-check technology. The Libramation LibraMate allows patrons 24-hour access with a machine that looks and feels much like an ATM. Using a simple touch screen, patrons can browse through a list of available items and make their selection. The library can configure the system to determine how many items a patron can borrow. The patron simply scans their patron card (standard barcode or RFID), and the item, in a protective case, is quickly ejected from the machine.