
Thursday, August 21, 2025

Daily Reading

 Sustainable development goal

The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet, now and into the future. At its heart are the 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries - developed and developing - in a global partnership. They recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests.

The SDGs build on decades of work by countries and the UN, including the UN Department of Economic and Social Affairs.

Today, the Division for Sustainable Development Goals (DSDG) in the United Nations Department of Economic and Social Affairs (UNDESA) provides substantive support and capacity-building for the SDGs and their related thematic issues, including water, energy, climate, oceans, urbanization, transport, science and technology, the Global Sustainable Development Report (GSDR), partnerships and Small Island Developing States. DSDG plays a key role in the evaluation of UN system-wide implementation of the 2030 Agenda and on advocacy and outreach activities relating to the SDGs. In order to make the 2030 Agenda a reality, broad ownership of the SDGs must translate into a strong commitment by all stakeholders to implement the global goals. DSDG aims to help facilitate this engagement.

Follow DSDG on Facebook at www.facebook.com/sustdev and on X at @SustDev.

Implementation Progress

Every year, the UN Secretary-General presents an annual SDG Progress Report, which is developed in cooperation with the UN system and based on the global indicator framework, using data produced by national statistical systems and information collected at the regional level.

Additionally, the Global Sustainable Development Report is produced once every four years to inform the quadrennial SDG review deliberations at the General Assembly. It is written by an Independent Group of Scientists appointed by the Secretary-General.


Wednesday, August 20, 2025

Daily Reading

 Altmetrics

In scholarly and scientific publishing, altmetrics (stands for "alternative metrics") are non-traditional bibliometrics proposed as an alternative or complement to more traditional citation impact metrics, such as impact factor and h-index. The term altmetrics was proposed in 2010, as a generalization of article level metrics, and has its roots in the #altmetrics hashtag. Although altmetrics are often thought of as metrics about articles, they can be applied to people, journals, books, data sets, presentations, videos, source code repositories, web pages, etc.

Altmetrics use public APIs across platforms to gather data with open scripts and algorithms. Altmetrics did not originally cover citation counts, but instead calculate scholarly impact based on diverse online research outputs, such as social media, online news media, online reference managers and so on. They demonstrate both the impact and the detailed composition of that impact. Altmetrics can be applied to research filtering, promotion and tenure dossiers, grant applications, and the ranking of newly published articles in academic search engines.

Over time, the diversity of sources mentioning, citing, or archiving articles has gone down, either because services such as Connotea ceased to exist or because of changes in API availability. For example, PlumX removed Twitter metrics in August 2023.

Adoption

The development of Web 2.0 has changed how research publications are sought and shared within and outside the academy, and it also provides new constructs for measuring the broader scientific impact of scholarly work. Although the traditional metrics are useful, they might be insufficient to measure immediate and uncited impacts, especially outside the peer-review realm.

Projects such as ImpactStory, and various companies, including Altmetric, Plum Analytics and Overton are calculating altmetrics. Several publishers have started providing such information to readers, including BioMed Central, Public Library of Science (PLOS),[21][22] Frontiers,[23] Nature Publishing Group,[24] and Elsevier.[25][26] The NIHR Journals Library also includes altmetric data alongside its publications.[27]

In 2008, the Journal of Medical Internet Research started to systematically collect tweets about its articles. Starting in March 2009, the Public Library of Science also introduced article-level metrics for all articles. Funders have started showing interest in alternative metrics, including the UK Medical Research Council. Altmetrics have been used in applications for promotion review by researchers. Furthermore, several universities, including the University of Pittsburgh, are experimenting with altmetrics at an institutional level.

However, it is also observed that an article needs little attention to jump to the upper quartile of ranked papers, suggesting that not enough sources of altmetrics are currently available to give a balanced picture of impact for the majority of papers.

To determine the relative impact of a paper, a service that calculates altmetrics needs a considerably sized knowledge base. The following table shows the number of artefacts, including papers, covered by several services:

Website: number of artefacts
Plum Analytics: ~52.6 million
Altmetric.com: ~28 million
ImpactStory: ~1 million
Overton: ~11 million

Categories

Altmetrics are a very broad group of metrics, capturing the various kinds of impact a paper or other work can have. A classification of altmetrics was proposed by ImpactStory in September 2012, and a very similar classification is used by the Public Library of Science:

  • Viewed – HTML views and PDF downloads
  • Discussed – journal comments, science blogs, Wikipedia, Twitter, Facebook and other social media
  • Saved – Mendeley, CiteULike and other social bookmarks
  • Cited – citations in the scholarly literature, tracked by Web of Science, Scopus, CrossRef and others
  • Recommended – for example used by F1000Prime

Viewed

One of the first alternative metrics to be used was the number of views of a paper. Traditionally, an author would wish to publish in a journal with a high subscription rate, so many people would have access to the research. With the introduction of web technologies it became possible to actually count how often a single paper was looked at. Typically, publishers count the number of HTML views and PDF views. As early as 2004, the BMJ published the number of views for its articles, which was found to be somewhat correlated to citations.

Discussed

The discussion of a paper can be seen as a metric that captures its potential impact. Typical sources of data for this metric include Facebook, Google+, Twitter, science blogs, and Wikipedia pages. Some researchers regard mentions on social media as citations, which can be divided into two categories: internal citations, such as retweets, and external citations, i.e. tweets containing links to outside documents.[42] The correlation between such mentions and likes and citation by primary scientific literature has been studied, and a slight correlation at best was found, e.g. for articles in PubMed.

In 2008 the Journal of Medical Internet Research began publishing views and tweets. These "tweetations" proved to be a good indicator of highly cited articles, leading the author to propose a "Twimpact factor", which is the number of tweets an article receives in the first seven days after publication, as well as a "Twindex", which is the rank percentile of an article's Twimpact factor. However, research shows Twimpact factor scores to be highly subject-specific, so comparisons of Twimpact factors should be made between papers in the same subject area. While past research in the literature has demonstrated a correlation between tweetations and citations, it is not a causative relationship. At this point in time, it is unclear whether higher citations occur as a result of greater media attention via Twitter and other platforms, or simply reflect the quality of the article itself.
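As a rough sketch of these two measures, the following Python snippet computes a Twimpact factor and a Twindex from invented tweet timestamps; the function names and data are hypothetical illustrations, not the journal's actual implementation.

from datetime import datetime, timedelta

def twimpact_factor(published, tweet_times, window_days=7):
    # Twimpact factor: number of tweets within the first window_days of publication.
    cutoff = published + timedelta(days=window_days)
    return sum(1 for t in tweet_times if published <= t <= cutoff)

def twindex(score, all_scores):
    # Twindex: rank percentile of an article's Twimpact factor among a set of articles.
    return 100.0 * sum(1 for s in all_scores if s < score) / len(all_scores)

pub = datetime(2012, 1, 1)
tweets = [pub + timedelta(days=d) for d in (0, 1, 2, 2, 5, 9, 30)]
tw = twimpact_factor(pub, tweets)          # 5 tweets fall within the first week
print(tw, twindex(tw, [0, 1, 3, tw, 12]))  # -> 5 60.0 (percentile among five hypothetical articles)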

Recent research conducted at the individual level, rather than the article level, supports the use of Twitter and social media platforms as a mechanism for increasing impact value. Results indicate that researchers whose work is mentioned on Twitter have significantly higher h-indices than those of researchers whose work was not mentioned on Twitter. The study highlights the role of using discussion based platforms, such as Twitter, in order to increase the value of traditional impact metrics.

Besides Twitter and other streams, blogging has been shown to be a powerful platform for discussing literature. Various platforms exist that keep track of which papers are being blogged about. Altmetric.com uses this information for calculating metrics, while other tools, such as ResearchBlogging and Chemical blogspace, just report where discussion is happening.

Platforms may even provide a formal way of ranking papers or recommending papers otherwise, such as Faculty of 1000.

Saved

It is also informative to quantify the number of times a page has been saved, or bookmarked. It is thought that individuals typically choose to bookmark pages that have a high relevance to their own work, and as a result, bookmarks may be an additional indicator of impact for a specific study. Providers of such information include science specific social bookmarking services such as CiteULike and Mendeley.

Cited

The cited category has a narrower definition than the discussed category. Besides the traditional metrics based on citations in scientific literature, such as those obtained from Google Scholar, CrossRef, PubMed Central, and Scopus, altmetrics also adopt citations in secondary knowledge sources. For example, ImpactStory counts the number of times a paper has been referenced by Wikipedia. Plum Analytics also provides metrics for various academic publications, seeking to track research productivity. PLOS likewise offers article-level information that can be used to gauge engagement.

Numerous studies have shown that scientific articles disseminated through social media channels (i.e. Twitter, Reddit, Facebook, YouTube, etc.) have substantially higher bibliometric scores (downloads, reads and citations) than articles not advertised through social media. In the fields of plastic surgery, hand surgery and more, higher Altmetric scores are associated with better short-term bibliometrics.

Interpretation

Beyond the limited consensus on the validity and consistency of altmetrics, their interpretation in particular is discussed. Proponents of altmetrics make clear that many of the metrics show attention or engagement, rather than the quality of impact on the progress of science. Even citation-based metrics do not indicate whether a high score implies a positive impact on science; that is, papers are also cited by papers that disagree with them, an issue addressed, for example, by the Citation Typing Ontology project.

Altmetrics could be more appropriately interpreted by providing detailed context and qualitative data. For example, in order to evaluate the scientific contribution of a scholarly work to policy making by altmetrics, qualitative data, such as who is citing it online and to what extent the online citation is relevant to the policymaking, should be provided as evidence.

Regarding the relatively low correlation between traditional metrics and altmetrics, altmetrics might measure complementary perspectives of scholarly impact. It is reasonable to combine and compare the two types of metrics in interpreting societal and scientific impacts. Researchers have built a 2×2 framework based on the interactions between altmetrics and traditional citations. Further explanations should be provided for the two groups with high altmetrics/low citations and low altmetrics/high citations. Thus, altmetrics provide convenient approaches for researchers and institutions to monitor the impact of their work and avoid inappropriate interpretations.

Controversy

The usefulness of metrics for estimating scientific impact is controversial. Research has found that online buzz could amplify the effect of other forms of outreach on researchers' scientific impact. For nano-scientists mentioned on Twitter, interactions with reporters and non-scientists positively and significantly predicted a higher h-index, whereas no such effect was found for the non-mentioned group. Altmetrics expand the measurement of scholarly impact by offering rapid uptake, a broader range of audiences and coverage of diverse research outputs. In addition, the community shows a clear need: funders demand measurables on the impact of their spending, such as public engagement.

However, there are limitations that affect the usefulness of altmetrics due to technical problems and systematic bias of the construct, such as data quality, heterogeneity and particular dependencies. In terms of technical problems, the data might be incomplete, because it is difficult to collect online research outputs without direct links to their mentions (e.g. videos) and to identify different versions of one research work. Additionally, whether the APIs lead to any missing data remains unresolved.

As for systematic bias, like other metrics, altmetrics are prone to self-citation, gaming, and other mechanisms for boosting one's apparent impact, such as performing citation spam in Wikipedia. Altmetrics can be gamed: for example, likes and mentions can be bought. Altmetrics can be more difficult to standardize than citations. One example is the number of tweets linking to a paper, where the number can vary widely depending on how the tweets are collected. Besides, online popularity may not equal scientific value. Some popular online citations might be far from generating further research discoveries, while some theoretically driven or minority-targeted research of great scientific importance might be marginalized online. For example, the top tweeted articles in biomedicine in 2011 were relevant to curious or funny content, potential health applications, and catastrophe. Altmetric states that it has systems in place to detect, identify and correct gaming. Finally, recent research has shown that altmetrics reproduce gendered biases found in disciplinary publication and citation practices: for example, journal articles authored exclusively by female scholars score 27% lower on average than exclusively male-authored outputs. At the same time, this same research shows that zero attention scores are more likely for male-authored articles.

Altmetrics for more recent articles may be higher because of the increasing uptake of the social web and because articles may be mentioned mainly when they are published. As a result, it might not be fair to compare the altmetrics scores of articles unless they have been published at a similar time. Researchers have developed a sign test to avoid the usage uptake bias by comparing the metrics of an article with those of the two articles published immediately before and after it.

It should be kept in mind that the metrics are only one of the outcomes of tracking how research is disseminated and used. Altmetrics should be carefully interpreted to overcome the bias. Even more informative than knowing how often a paper is cited, is which papers are citing it. That information allows researchers to see how their work is impacting the field (or not). Providers of metrics also typically provide access to the information from which the metrics were calculated. For example, Web of Science shows which are the citing papers, ImpactStory shows which Wikipedia pages are referencing the paper, and CitedIn shows which databases extracted data from the paper.

Another concern with altmetrics, or any metrics, is how universities or institutions use metrics to rank their employees and make promotion or funding decisions, when the aim should be limited to measuring engagement.

The overall online research output is still small and varies among disciplines. The phenomenon might be consistent with social media use among scientists. Surveys have shown that nearly half of their respondents held ambivalent attitudes toward social media's influence on academic impact and never announced their research work on social media. With the shift toward open science and growing social media use, consistent altmetrics across disciplines and institutions are more likely to be adopted.

Ongoing research

The specific use cases and characteristics of altmetrics are an active research field in bibliometrics, providing much-needed data to measure the impact of altmetrics itself. The Public Library of Science has an Altmetrics Collection, and both the Information Standards Quarterly and the Aslib Journal of Information Management recently published special issues on altmetrics. A series of articles that extensively reviews altmetrics was published in late 2015.[68][69][70]

There is other research examining the validity of individual altmetrics[4][28] or making comparisons across different platforms.[60] Researchers examine the correlation between altmetrics and traditional citations as a validity test, assuming that a positive and significant correlation shows that altmetrics measure scientific impact as citations do.[60] A low correlation (less than 0.30[4]) leads to the conclusion that altmetrics serve a complementary role in measuring scholarly impact. For example, Lamba (2020)[71] examined 2,343 articles, having both altmetric attention scores and citations, published by 22 core health care policy faculty members at Harvard Medical School, and observed a significant strong positive correlation (r > 0.4) between the aggregated ranked altmetric attention scores and ranked citation/increased citation values for all the faculty members in the study. However, it remains unresolved which altmetrics are most valuable and what degree of correlation between the two types of metrics generates a stronger impact on the measurement. Additionally, the validity test itself faces some technical problems as well. For example, replication of the data collection is impossible because data providers' algorithms change constantly.[72]

Daily reading

 

Browsing the library remotely: Virtual Shelf Browse

Daily Reading

  Academic journal publishing reform

    Academic journal publishing reform is the advocacy for changes in the way academic journals are created and distributed in the age of the Internet and the advent of electronic publishing. Since the rise of the Internet, people have organized campaigns to change the relationships among and between academic authors, their traditional distributors and their readership. Most of the discussion has centered on taking advantage of benefits offered by the Internet's capacity for widespread distribution of reading material. 

Before the advent of the Internet, it was difficult for scholars to distribute articles giving their research results. Historically, publishers performed services including proofreading, typesetting, copy editing, printing, and worldwide distribution. In modern times, researchers came to be expected to give publishers digital copies of their work which needed no further processing.[1] For digital distribution, printing was unnecessary, copying was free, and worldwide distribution could happen online instantly. In science journal publishing, Internet technology enabled the Big Five major scientific publishers (Elsevier, Springer, Wiley, Taylor and Francis and the American Chemical Society) to cut their expenditures such that they could consistently generate profits of over 35% per year. In 2017 these five published 56% of all journal articles. The remaining 44% were published by over 200 small publishers.

The Internet made it easier for researchers to do work which had previously been done by publishers, and some people began to feel that they did not need to pay for the services of publishers. This perception was a problem for publishers, who stated that their services were still necessary at the rates they asked. Critics began to describe publishers' practices with terms such as "corporate scam" and "racket". Scholars sometimes obtain articles from fellow scholars through unofficial channels, such as posting requests on Twitter using the hashtag "#icanhazpdf" (a play on the I Can Has Cheezburger? meme), to avoid paying publishers' access charges. In 2004, there were reports in British media of a "revolution in academic publishing" which would make research freely available online but many scientists continued to publish their work in the traditional big name journals like Nature.

For a short time in 2012, the name Academic Spring, inspired by the Arab Spring, was used to indicate movements by academics, researchers, and scholars opposing the restrictive copyright and circulation of traditional academic journals and promoting free access online instead. The barriers to free access for recent scientific research became a hot topic in 2012, after a blog post by mathematician Timothy Gowers went viral in January. According to the Financial Times, the movement was named by Dennis Johnson of Melville House Publishing, though scientist Mike Taylor has suggested the name came from The Economist.

Mike Taylor argued that the Academic Spring may have some unexpected results beyond the obvious benefits. Referring to work by the biophysicist Cameron Neylon, he says that, because modern science is now more dependent on well-functioning networks than on individuals, making information freely available may help computer-based analyses to provide opportunities for major scientific breakthroughs. Government and university officials have welcomed the prospect of saving on subscriptions, which have been rising in cost while universities' budgets have been shrinking. Mark Walport, the director of the Wellcome Trust, has indicated that science sponsors do not mind having to fund publication in addition to the research. Not everyone has been supportive of the movement, with scientific publisher Kent Anderson calling it "shallow rhetoric aimed at the wrong target."


Motivations for reform

Although it has some historical precedent, open access became desired in response to the advent of electronic publishing as part of a broader desire for academic journal publishing reform. Electronic publishing created new benefits as compared to paper publishing but beyond that, it contributed to causing problems in traditional publishing models.

The premises behind open access are that there are viable funding models to maintain traditional academic publishing standards of quality while also making the following changes to the field:

  1. Rather than making journals be available through a subscription business model, all academic publications should be free to read and published with some other funding model. Publications should be gratis or "free to read".
  2. Rather than applying traditional notions of copyright to academic publications, readers should be free to build upon the research of others. Publications should be libre or "free to build upon".
  3. Everyone should have greater awareness of the serious social problems caused by restricting access to academic research.
  4. Everyone should recognize that there are serious economic challenges for the future of academic publishing. Even though open access models are problematic, traditional publishing models definitely are not sustainable and something radical needs to change immediately.

Open access also has ambitions beyond merely granting access to academic publications, as access to research is only a tool for helping people achieve other goals. Open access advances scholarly pursuits in the fields of open data, open government, open educational resources, free and open-source software, and open science, among others.

Problems addressed by academic publishing reform

The motivations for academic journal publishing reform include the ability of computers to store large amounts of information, the advantages of giving more researchers access to preprints, and the potential for interactivity between researchers.

Various studies showed that the demand for open access research was such that freely available articles consistently had impact factors which were higher than articles published under restricted access.

Some universities reported that modern "package deal" subscriptions were too costly for them to maintain, and that they would prefer to subscribe to journals individually to save money.

The problems which led to discussion about academic publishing reform have been considered in the context of what provision of open access might provide. Here are some of the problems in academic publishing which open access advocates purport that open access would address:

  1. A pricing crisis called the serials crisis has been growing in the decades before open access and remains today. The academic publishing industry has increased prices of academic journals faster than inflation and beyond the library budgets.
  2. The pricing crisis does not only mean strain to budgets, but also that researchers actually are losing access to journals.
  3. Not even the wealthiest libraries in the world are able to afford all the journals that their users are demanding, and less rich libraries are severely harmed by lack of access to journals.
  4. Publishers are using "bundling" strategies to sell journals, and this marketing strategy is criticized by many libraries as forcing them to pay for unpopular journals which their users are not demanding, while squeezing out of library budgets smaller publishers, who cannot offer bundled subscriptions.
  5. Libraries are cutting their book budgets to pay for academic journals.
  6. Libraries do not own electronic journals in permanent archival form as they do paper copies, so if they have to cancel a subscription then they lose all subscribed journals. This did not happen with paper journals, and yet costs historically have been higher for electronic versions.
  7. Academic publishers get essential assets from their subscribers in a way that other publishers do not. Authors donate the texts of academic journals to the publishers and grant rights to publish them, and editors and referees donate peer-review to validate the articles. The people writing the journals are questioning the increased pressure put upon them to pay higher prices for the journal produced by their community.
  8. Conventional publishers are using a business model which requires access barriers and creates artificial scarcity. All publishers need revenue, but open access promises models in which scarcity is not fundamental to raising revenue.
  9. Scholarly publishing depends heavily on government policy, public subsidies, gift economy, and anti-competitive practices, yet all of these things are in conflict with the conventional academic publishing model of restricting access to works.[16]
  10. Toll access journals compete more for authors to donate content to them than they compete for subscribers to pay for the work. This is because every scholarly journal has a natural monopoly over the information of its field. Because of this, the market for journal pricing lacks feedback, since it operates outside of traditional market forces, and there is no control to drive prices to serve the needs of the market.[16]
  11. Besides the natural monopoly, there is supporting evidence that prices are artificially inflated to benefit publishers while harming the market. Evidence includes the trend of large publishers to have accelerating price increases greater than those of small publishers, when in traditional markets high volume and high sales enable cost savings and lower prices.
  12. Conventional publishers fund "content protection" actions which restrict and police content sharing.[16]
  13. For-profit publishers have economic incentives to decrease rates of rejected articles so that they publish more content to sell. No such market force exists if selling content for money is not a motivating factor.[16]
  14. Many researchers are unaware that it might be possible for them to have all the research articles they need, and just accept it as fate that they will always be without some of the articles they would like to read.[16]
  15. Access to toll-access journals is not scaling with increases in research and publishing, and the academic publishers are under market forces to restrict increases in publishing and indirectly because of that they are restricting the growth of research.[16]

Motivations against reform

Publishers state that if profit was not a consideration in the pricing of journals then the cost of accessing those journals would not substantially change.[22] Publishers also state that they add value to publications in many ways, and without academic publishing as an institution the readership would miss these services and fewer people would have access to articles.[22]

Critics of open access have suggested that by itself, this is not a solution to scientific publishing's most serious problem – it simply changes the paths through which ever-increasing sums of money flow.[23] Evidence for this does exist; for example, Yale University ended its financial support of BioMed Central's Open Access Membership program effective July 27, 2007. In their announcement, they stated:

The libraries’ BioMedCentral membership represented an opportunity to test the technical feasibility and the business model of this open access publisher. While the technology proved acceptable, the business model failed to provide a viable long-term revenue base built upon logical and scalable options. Instead, BioMedCentral has asked libraries for larger and larger contributions to subsidize their activities. Starting with 2005, BioMed Central article charges cost the libraries $4,658, comparable to single biomedicine journal subscription. The cost of article charges for 2006 then jumped to $31,625. The article charges have continued to soar in 2007 with the libraries charged $29,635 through June 2007, with $34,965 in potential additional article charges in submission.[24]

A similar situation is reported from the University of Maryland, and Phil Davis commented that,

The assumptions that open access publishing is both cheaper and more sustainable than the traditional subscription model are featured in many of these mandates. But they remain just that — assumptions. In reality, the data from Cornell[25] show just the opposite. Institutions like the University of Maryland would pay much more under an author-pays model, as would most research-intensive universities, and the rise in author processing charges (APCs) rivals the inflation felt at any time under the subscription model.[26]

Opponents of the open access model see publishers as a part of the scholarly information chain and view a pay-for-access model as being necessary in ensuring that publishers are adequately compensated for their work. "In fact, most STM [Scientific, Technical and Medical] publishers are not profit-seeking corporations from outside the scholarly community, but rather learned societies and other non-profit entities, many of which rely on income from journal subscriptions to support their conferences, member services, and scholarly endeavors".[27] Scholarly journal publishers that support pay-for-access claim that the "gatekeeper" role they play, maintaining a scholarly reputation, arranging for peer review, and editing and indexing articles, require economic resources that are not supplied under an open access model. Conventional journal publishers may also lose customers to open access publishers who compete with them. The Partnership for Research Integrity in Science and Medicine (PRISM), a lobbying organization formed by the Association of American Publishers (AAP), is opposed to the open access movement.[28] PRISM and AAP have lobbied against the increasing trend amongst funding organizations to require open publication, describing it as "government interference" and a threat to peer review.[29]

For researchers, publishing an article in a reputable scientific journal is perceived as being beneficial to one's reputation among scientific peers and in advancing one's academic career. There is a concern that open access journals are perceived as not having the same reputation, which will lead to less publishing in them.[30] Park and Qin discuss the perceptions that academics have with regard to open access journals. According to them, one concern among academics is that there "are growing concerns about how to promote [Open Access] publishing." Park and Qin also state, "The general perception is that [Open Access] journals are new, and therefore many uncertainties, such as quality and sustainability, exist."

Journal article authors are generally not directly financially compensated for their work beyond their institutional salaries and the indirect benefits that an enhanced reputation provides in terms of institutional funding, job offers, and peer collaboration.[31]

There are those, for example PRISM, who think that open access is unnecessary or even harmful. David Goodman argued that there is no need for those outside major academic institutions to have access to primary publications, at least in some fields.[32]

The argument that publicly funded research should be made openly available has been countered with the assertion that "taxes are generally not paid so that taxpayers can access research results, but rather so that society can benefit from the results of that research; in the form of new medical treatments, for example. Publishers claim that 90% of potential readers can access 90% of all available content through national or research libraries, and while this may not be as easy as accessing an article online directly it is certainly possible."[33] The argument for tax-payer funded research is only applicable in certain countries as well. For instance in Australia, 80% of research funding comes through taxes, whereas in Japan and Switzerland, only approximately 10% is from the public coffers.[33]

For various reasons open access journals have been established by predatory publishers who seek to use the model to make money without regard to producing a quality journal. The causes of predatory open access publishing include the low barrier to creating the appearance of a legitimate digital journal and funding models which may include author publishing costs rather than subscription sales. University librarian Jeffrey Beall published a "List of Predatory Publishers" and an accompanying methodology for identifying publishers who have editorial and financial practices which are contrary to the ideal of good research publishing practices.[34][35]

Reform initiatives

Public Library of Science

The Public Library of Science is a nonprofit open-access scientific publishing project aimed at creating a library of open access journals and other scientific literature under an open content license. The founding of the organization had its origins in a 2001 online petition calling for all scientists to pledge that from September 2001 they would discontinue submission of papers to journals which did not make the full-text of their papers available to all, free and unfettered, either immediately or after a delay of several months. The petition collected 34,000 signatures but the publishers took no strong response to the demands. Shortly thereafter, the Public Library of Science was founded as an alternative to traditional publishing.

HINARI

HINARI is a 2002 project of the World Health Organization and major publishers to enable developing countries to access collections of biomedical and health literature online at reduced subscription costs.

Research Works Act

The Research Works Act was a bill of the United States Congress which would have prohibited all laws that would require an open access mandate when US-government-funded researchers published their work. The proposers of the law stated that it would "ensure the continued publication and integrity of peer-reviewed research works by the private sector." This followed other similar proposed measures such as the Fair Copyright in Research Works Act. These attempts to limit free access to such material are controversial and have provoked lobbying for and against by numerous interested parties such as the Association of American Publishers and the American Library Association.[39] Critics of the law stated that it was the moment that "academic publishers gave up all pretence of being on the side of scientists." In February 2012, Elsevier withdrew its support for the bill. Following this statement, the sponsors of the bill announced that they would also withdraw their support.

The Cost of Knowledge

In January 2012, Cambridge mathematician Timothy Gowers, started a boycott of journals published by Elsevier, in part a reaction to their support for the Research Works Act. In response to an angry blog post by Gowers, the website The Cost of Knowledge was launched by a sympathetic reader. An online petition called The Cost of Knowledge was set up by fellow mathematician Tyler Neylon, to gather support for the boycott. By early April 2012, it had been signed by over eight thousand academics. As of mid-June 2012, the number of signatories exceeded 12,000.

Access2Research

In May 2012, a group of open-access activists formed the Access2Research initiative that went on to launch a petition to the White House to "require free access over the Internet to journal articles arising from taxpayer-funded research". The petition was signed by over 25,000 people within two weeks, which entitled it to an official response from the White House.

PeerJ

PeerJ is an open-access journal launched in 2012 that charges publication fees per researcher, not per article, resulting in what has been called "a flat fee for 'all you can publish'".

Public Knowledge Project

Since 1998, PKP has been developing free open source software platforms for managing and publishing peer-reviewed open access journals and monographs, with Open Journal Systems used by more than 7,000 active journals in 2013.

Schekman boycott

2013 Nobel Prize winner Randy Schekman called for a boycott of traditional academic journals including Nature, Cell, and Science. Instead he promoted the open access journal eLife.

Initiative for Open Citations

The Initiative for Open Citations is a CrossRef initiative for improved citation analysis. It was supported by a majority of the publishers, effective from April 2017.

Diamond Open Access journals

Diamond open access journals have been promoted as the ultimate solution to the serials crisis. Under this model, neither the authors nor the readers pay for access or publication, and the resources required to run the journal are provided by scientists on a voluntary basis, by governments or by philanthropic grants. Although the Diamond OA model has turned out to be successful in creating a large number of journals, the percentage of publications in such journals remains low, possibly due to the low prestige of these new journals and concerns about their long-term viability.

 

Tuesday, August 19, 2025

Monday, August 18, 2025

Daily Reading

  Journal Citation Reports

Journal Citation Reports (or JCR) is a product of Clarivate Analytics and is an authoritative resource for impact factor data. This database provides impact factors and rankings of many journals in the social and life sciences based on millions of citations.  It offers numerous sorting options including impact factor, total cites, total articles, and immediacy index.  In addition, JCR provides a five-year impact factor and visualized trend data. 

further reading: youtube: https://www.youtube.com/watch?v=nvMJlBi_i_I

Daily Reading

 i10-Index

Created by Google Scholar and used in Google's My Citations feature. 

i10-Index = the number of publications with at least 10 citations.  

This very simple measure is only used by Google Scholar, and is another way to help gauge the productivity of a scholar.  
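As a minimal illustration of the calculation (using an invented list of citation counts), an i10-Index could be computed in Python like this:

def i10_index(citations):
    # i10-Index: number of publications with at least 10 citations.
    return sum(1 for c in citations if c >= 10)

print(i10_index([25, 14, 10, 9, 3, 0]))  # -> 3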

Advantages of i10-Index

  • Very simple and straightforward to calculate
  • My Citations in Google Scholar is free and easy to use

Disadvantages of i10-Index

  • Used only in Google Scholar

Here is a screenshot of a Google Scholar My Citations page for Charles Darwin (you can see the i10-Index highlighted in the small table):

[Screenshot: The i10-Index for Charles Darwin]

 


Daily Reading

 H-Index

The Web of Science uses the H-Index to quantify research output by measuring author productivity and impact.

H-Index = number of papers (h) with a citation number ≥ h.  

Example: a scientist with an H-Index of 37 has 37 papers cited at least 37 times.  
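As a sketch of the calculation itself (not of how Web of Science implements it), the H-Index can be computed from a list of per-paper citation counts, here invented for illustration:

def h_index(citations):
    # H-Index: largest h such that h papers each have at least h citations.
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with at least 4 citations each)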

Advantages of the H-Index:

  • Allows for direct comparisons within disciplines
  • Measures quantity and impact by a single value.

Disadvantages of the H-Index:

  • Does not give an accurate measure for early-career researchers
  • Calculated by using only articles that are indexed in Web of Science.  If a researcher publishes an article in a journal that is not indexed by Web of Science, the article as well as any citations to it will not be included in the H-Index calculation.

Tools for measuring H-Index:

  • Web of Science
  • Google Scholar

This short clip helps to explain the limitations of the H-Index for early-career scientists:

further reading: https://guides.library.cornell.edu/c.php?g=32272&p=203391 

https://www.youtube.com/watch?v=jeR05trDeNo

https://www.youtube.com/shorts/gdduiHTU2Oc


 

Daily Reading

 G-index

 The G-index was proposed by Leo Egghe in his paper "Theory and Practice of the G-Index" in 2006 as an improvement on the H-Index. 

G-Index is calculated this way: "Given a set of articles ranked in decreasing order of the number of citations that they received, the G-Index is the (unique) largest number such that the top g articles received (together) at least g^2 citations." (from Harzing's Publish or Perish manual)
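To make that definition concrete, here is a minimal Python sketch of the calculation, using an invented list of citation counts:

def g_index(citations):
    # G-Index: largest g such that the top g articles together
    # received at least g^2 citations (limited here to the number of papers).
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

print(g_index([10, 5, 3, 1, 1, 0]))  # -> 4 (top 4 papers have 19 >= 16 citations)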

Advantages of the G-Index:

  • Accounts for the performance of an author's top articles.
  • Helps to make more apparent the difference between authors' respective impacts.  The inflated values of the G-Index help to give credit to lowly cited or non-cited papers while giving credit for highly cited papers.

Disadvantages of the G-Index:

  • Introduced in 2006, and debate continues over whether the G-Index is superior to the H-Index.
  • Might not be as widely accepted as the H-Index.
further reading: https://guides.library.cornell.edu/c.php?g=32272&p=203392

Daily Reading

 LIB-Vahini

Empowering LIS professionals through connection, ......growth....

LIB-VAHINI serves as a dedicated digital space for professionals to showcase their profiles, achievements, and contributions. By offering visibility, career development opportunities, and networking avenues, the portal aims to elevate the professional identity of LIS practitioners and promote their pivotal role in knowledge management. The platform also acts as a national-level repository of LIS talent, facilitating collaborations, mentorship, and institutional partnerships.

At its core, LIS is about building a vibrant, inclusive community where professionals can interact, share insights, and inspire the next generation. We celebrate the dedication, innovation, and impact of LIS professionals, ensuring their work continues to shape informed and connected societies. 

further reading: https://libvahini.inflibnet.ac.in/#about_lib

Wednesday, August 13, 2025

Daily Reading

The g-index is an author-level metric suggested in 2006 by Leo Egghe. The index is calculated based on the distribution of citations received by a given researcher's publications, such that given a set of articles ranked in decreasing order of the number of citations that they received, the g-index is the unique largest number such that the top g articles received together at least g² citations. Hence, a g-index of 10 indicates that the top 10 publications of an author have been cited at least 100 times (10²), and a g-index of 20 indicates that the top 20 publications of an author have been cited at least 400 times (20²).

It can be equivalently defined as the largest number n of highly cited articles for which the average number of citations is at least n. This is in fact a rewriting of the definition

g² ≤ Σ(i=1..g) c_i

as

g ≤ (1/g) Σ(i=1..g) c_i

where c_i is the number of citations of the i-th most cited article.

[Figure: An example of a g-index (the raw citation data, plotted with stars, allows the h-index to also be extracted for comparison).]
The g-index is an alternative for the older h-index. The h-index does not average the number of citations. Instead, the h-index only requires a minimum of n citations for the least-cited article in the set and thus ignores the citation count of very highly cited publications. Roughly, the effect is that h is the number of works of a quality threshold that rises as h rises; g allows citations from higher-cited works to be used to bolster lower-cited works in meeting this threshold. In effect, the g-index is the maximum reachable value of the h-index if a fixed number of citations can be distributed freely over a fixed number of publications. Therefore, in all cases g is at least h, and is in most cases higher. The g-index often separates authors based on citations to a greater extent compared to the h-index. However, unlike the h-index, the g-index saturates whenever the average number of citations for all publications exceeds the total number of publications; the way it is defined, the g-index is not adapted to this situation. However, if an author with a saturated g-index publishes more, their g-index will increase.

An example of two authors who both have 10 publications and both have an h-index of 6. However, Author 1 has a g-index of 10, while Author 2 has a g-index of 7.
              Author 1   Author 2
Work 1            30         10
Work 2            17          9
Work 3            15          9
Work 4            13          9
Work 5             8          8
Work 6             6          6
Work 7             5          5
Work 8             4          4
Work 9             3          2
Work 10            1          1
Total cites      102         63
Average cites   10.2        6.3
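As a quick check of this example, the following Python sketch recomputes both indices from the citation counts in the table, using the definitions given in the text:

def h_index(citations):
    # h-index: largest h such that h papers each have at least h citations.
    ranked = sorted(citations, reverse=True)
    return max([i for i, c in enumerate(ranked, start=1) if c >= i], default=0)

def g_index(citations):
    # g-index: largest g such that the top g papers together have at least g^2 citations.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

author1 = [30, 17, 15, 13, 8, 6, 5, 4, 3, 1]   # 102 citations in total, average 10.2
author2 = [10, 9, 9, 9, 8, 6, 5, 4, 2, 1]      # 63 citations in total, average 6.3
print(h_index(author1), g_index(author1))       # -> 6 10
print(h_index(author2), g_index(author2))       # -> 6 7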

The g-index has been characterized in terms of three natural axioms by Woeginger (2008).[2] The simplest of these three axioms states that by moving citations from weaker articles to stronger articles, one's research index should not decrease. Like the h-index, the g-index is a natural number and thus lacks discriminatory power. Therefore, Tol (2008) proposed a rational generalisation.

Tol also proposed a collective g-index.

Given a set of researchers ranked in decreasing order of their g-index, the g1-index is the (unique) largest number such that the top g1 researchers have on average at least a g-index of g1.
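A minimal sketch of this collective index, assuming a hypothetical list of researchers' g-indices:

def g1_index(g_indices):
    # g1-index: largest g1 such that the top g1 researchers, ranked by g-index,
    # have an average g-index of at least g1.
    ranked = sorted(g_indices, reverse=True)
    g1, total = 0, 0
    for i, g in enumerate(ranked, start=1):
        total += g
        if total / i >= i:
            g1 = i
    return g1

print(g1_index([8, 6, 4, 2, 1]))  # -> 4 (top 4 average 5.0 >= 4; top 5 average 4.2 < 5)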

Daily Quiz

Q. The term for patents in India is ............ years. Ans. 20
Q. The Indian Patents and Designs Act 1911 came into force in .................. Ans. 1912
Q...