Wednesday charm








We hope to see you again…
There is a watch in effect at this time for a coronal mass ejection.
The strength of geomagnetic storms is rated G1–G5 (NOAA).
Look up and enjoy often, as geomagnetic activity could be strong into 2026.

A moderate coronal mass ejection in May has a major storm watch in effect through today at 8 PM EDT.
Harvard anyone?

Navigating University Rankings: A Comprehensive Guide to Global and Regional Assessments
I. Executive Summary
The landscape of higher education is increasingly shaped by various university ranking systems, which serve as influential barometers of institutional performance globally. These rankings, compiled by prominent organizations such as Quacquarelli Symonds (QS), Times Higher Education (THE), and the Academic Ranking of World Universities (ARWU), offer comparative assessments that guide a diverse array of stakeholders, including prospective students, parents, educators, and policymakers. While ostensibly providing clarity and benchmarking, these systems operate with distinct methodologies, leading to varied outcomes and highlighting the importance of understanding their underlying criteria.
This report delves into the intricate world of university rankings, clarifying their geographical scope from global to national and provincial levels. It meticulously examines the methodologies of major global ranking bodies, detailing their indicators and assessment foci. A particular emphasis is placed on a Canadian case study, illustrating how these global frameworks translate into national and regional assessments, with specific data points for leading Canadian and British Columbia universities. Furthermore, the report critically analyzes the inherent criticisms and limitations of current ranking paradigms, addressing concerns regarding methodological biases, data integrity, and the profound impact on institutional behavior and academic priorities. Finally, it explores evolving trends and alternative assessment approaches that prioritize teaching quality, student experience, social responsibility, and open science, advocating for a more holistic and nuanced understanding of university quality for informed decision-making in higher education.
II. Introduction to University Rankings
University rankings are structured comparative evaluations of higher education institutions, designed to provide a snapshot of their performance across a spectrum of criteria. Their primary purpose extends beyond mere comparison, serving as critical tools for guiding prospective students in their academic pursuits, enabling institutions to benchmark their performance against peers, informing governmental policies on education, and influencing the allocation of vital funding and resources. For instance, QS explicitly states its rankings are designed to assess key aspects of a university’s mission, encompassing teaching, research, the nurturing of employability, and internationalization. Similarly, THE evaluates universities across five core mission categories: Teaching, Research Environment, Research Quality, International Outlook, and Industry Income.
The user’s inquiry regarding “counties rankings of university’s” points to a common desire for geographically specific information. It is important to clarify that university ranking systems typically operate at broader geographical scales than individual counties, focusing instead on global, national, or sub-national divisions such as provinces or regions. This approach is necessitated by the extensive scope of data collection required and the operational scale of most higher education institutions. Global rankings, exemplified by QS, THE, and ARWU, offer a worldwide comparative perspective, while national rankings, such as Maclean’s in Canada, provide detailed country-specific insights. Provincial or regional breakdowns are often derived from these larger-scale rankings or are a focus of specialized national publications.
University rankings are not merely passive descriptive tools; they function as active mechanisms that profoundly influence institutional behavior and national policy. Governments, for example, explicitly leverage ranking outcomes to direct funding and prioritize resources towards universities that demonstrate strong performance on global scales. This governmental reliance can lead to policy reforms in higher education, including curriculum modernization, faculty recruitment strategies, and the allocation of research and innovation funding. Concurrently, university administrators meticulously monitor these rankings to maintain competitiveness, attract students, and secure funding. This close observation often translates into strategic decisions, such as allocating more resources to research output, faculty publications, and citation impact, as these metrics significantly influence ranking positions. This dynamic illustrates that rankings possess a significant power beyond simple comparison, potentially leading to a homogenization of institutional priorities, sometimes at the expense of other vital academic missions.
III. Major Global University Ranking Systems and Their Methodologies
Understanding the methodologies of the most influential global university ranking systems is fundamental to interpreting their results and appreciating the nuances of institutional assessment on an international scale.
A. QS World University Rankings
The QS World University Rankings are designed to evaluate university performance based on what QS identifies as core aspects of a university’s mission: teaching, research, fostering employability, and internationalization. Specifically, their subject rankings are determined by five key indicators:
* Academic Reputation: This indicator is derived from a global survey of academics, reflecting which universities are considered excellent for research in particular fields by their peers. The survey results are meticulously filtered according to the narrow area of expertise identified by respondents.
* Employer Reputation: This metric is based on survey responses from graduate employers worldwide. Employers identify institutions they consider excellent for recruiting graduates and specify the disciplines from which they prefer to recruit.
* Research Citations per Paper: This indicator sources all citation data from Elsevier Scopus. A minimum publication threshold is applied for each subject to prevent anomalies from small numbers of highly cited papers, and the weighting for citations is adjusted to reflect prevalent publication and citation patterns within specific disciplines.
* H-index: The H-index serves as a measure of both the productivity and impact of an academic or a university department. It is calculated based on an academic’s most cited papers and the total number of citations these papers have received in other publications.
* International Research Network (by broad faculty area): This index quantifies an institution’s capacity to diversify the geographical reach of its international research collaborations by establishing sustainable partnerships with other higher education institutions globally.
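The h-index described above is simple enough to compute directly. The sketch below implements the standard definition (the largest h such that h papers each have at least h citations); it is a generic illustration, not QS's actual calculation pipeline.

```python
def h_index(citations):
    """Return the h-index: the largest h such that there are
    at least h papers with at least h citations each."""
    # Sort citation counts in descending order so that the rank of a
    # paper equals the number of papers at least as highly cited.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # still at least `rank` papers with >= rank citations
        else:
            break
    return h

# A researcher with papers cited [10, 8, 5, 4, 3] has an h-index of 4:
# four papers each have at least 4 citations, but not five with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The same calculation applies whether the unit is an individual academic or a whole department, which is how QS describes using it at the institutional level.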
QS offers users various ways to navigate its rankings, allowing for filtering by broad subject areas such as Arts and Humanities, Engineering and Technology, Life Sciences and Medicine, Natural Sciences, and Social Sciences and Management. Users can also view rankings for specific disciplines, with the latest rankings covering 55 subjects ranging from history of art to veterinary science. Furthermore, the platform allows users to filter by individual indicators, enabling them to identify top universities based on specific criteria like academic reputation or employer reputation.
B. Times Higher Education (THE) World University Rankings
The Times Higher Education (THE) World University Rankings aim to comprehensively assess world-class universities across their core missions: Teaching, Research Environment, Research Quality, International Outlook, and Industry Income. THE’s methodology undergoes periodic updates, with notable changes in the 2025 World Reputation Rankings. These updates moved beyond a singular reliance on vote counts to incorporate pairwise comparison and voter diversity, reflecting an evolving understanding of reputation.
The key categories and metrics for the overall THE World University Rankings include:
* Teaching (the learning environment): This metric evaluates the quality of the learning experience and is underpinned by five performance indicators: teaching reputation (derived from the Academic Reputation Survey), staff-to-student ratio, doctorate-to-bachelors ratio, doctorates-awarded-to-academic-staff ratio, and institutional income.
* Research Environment (volume, income, and reputation): This category considers the scale and quality of research activities, measured through research reputation, research income, and research productivity.
* Research Quality (citation impact, research strength, research excellence, and research influence): This category assesses the impact and influence of a university’s research. It includes citation impact, with additional metrics introduced in 2023 for research strength, research excellence, and research influence.
* International Outlook (staff, students, and research): This category highlights an institution’s global competitiveness and its ability to attract a diverse international cohort of students and researchers. It is measured by the proportion of international students, international staff, and the extent of international collaboration.
* Industry Income (income and patents): This category reflects the commercial impact of a university’s research, indicating its industrial value and commitment to supporting local and national economies. It is calculated based on industry income and the number of patents generated.
For the THE World Reputation Rankings 2025, the assessment is structured around three core pillars of evaluation:
* Vote counts: This pillar assesses the number of votes received for both research and teaching. The 2025 methodology introduced a cumulative scoring function to flatten the score curve, allowing for more meaningful comparisons.
* Pairwise comparison: This method encourages voters to consider a broader spectrum of institutions beyond the well-known “super-brands” by asking respondents to rank preselected universities from 1 to 5.
* Voter diversity: This pillar rewards universities that receive votes from a wide array of territories and subject areas, suggesting a more robust and widely recognized reputation.
These pillars are further broken down into six underlying performance indicators with specific weightings: Research vote count (30%), Teaching vote count (30%), Research pairwise comparison (10%), Teaching pairwise comparison (10%), Research voter diversity (10%), and Teaching voter diversity (10%).
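The weighting scheme above amounts to a weighted sum of the six indicator scores. The sketch below shows that arithmetic with hypothetical normalized scores (0–100); the indicator names and example values are illustrative, and THE's actual normalization of raw survey data is not reproduced here.

```python
# Indicator weightings from the THE World Reputation Rankings 2025 methodology.
WEIGHTS = {
    "research_vote_count": 0.30,
    "teaching_vote_count": 0.30,
    "research_pairwise": 0.10,
    "teaching_pairwise": 0.10,
    "research_voter_diversity": 0.10,
    "teaching_voter_diversity": 0.10,
}

def reputation_score(indicators):
    """Combine normalized indicator scores (0-100) into an overall score."""
    return sum(WEIGHTS[name] * score for name, score in indicators.items())

# A hypothetical university scoring well on raw votes but less on diversity:
example = {
    "research_vote_count": 90.0,
    "teaching_vote_count": 85.0,
    "research_pairwise": 70.0,
    "teaching_pairwise": 75.0,
    "research_voter_diversity": 60.0,
    "teaching_voter_diversity": 55.0,
}
print(round(reputation_score(example), 2))  # 78.5
```

Because vote counts carry 60% of the weight between them, the "super-brand" effect the pairwise and diversity pillars are meant to counter is still visible in the arithmetic.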
The distinct methodologies employed by QS and THE, particularly when contrasted with ARWU, reflect fundamentally different philosophies regarding what constitutes “excellence” in higher education. ARWU’s heavy reliance on highly objective, high-impact research outputs, such as Nobel laureates and publications in elite journals, inherently favors large, established, research-intensive institutions with a long history of producing groundbreaking discoveries. This approach, while seemingly objective, may not fully capture the diverse missions and contributions of all universities. Conversely, QS and THE incorporate more subjective reputational surveys and broader factors like employer perception and internationalization, which can offer a more holistic, though potentially less universally “objective,” view of a university’s standing. THE’s recent modifications to its reputation rankings, by including “pairwise comparison” and “voter diversity,” signal an evolving effort to broaden the scope of reputational assessment beyond mere brand recognition. This indicates a dynamic understanding within these ranking bodies of the multifaceted elements that contribute to a university’s overall standing. This divergence in underlying philosophies means that a university’s numerical rank is highly contextual and depends significantly on which ranking system is consulted. Users must therefore understand these differing priorities to interpret rankings effectively and select the assessment that aligns best with their specific needs and values, whether prioritizing research strength, student experience, or graduate employability. This also creates a strategic environment for universities, as they may choose to align their investments and efforts with the metrics of specific ranking systems they aim to perform well in.
C. Academic Ranking of World Universities (ARWU – ShanghaiRanking)
The Academic Ranking of World Universities (ARWU), initially published in June 2003 by Shanghai Jiao Tong University and copyrighted by ShanghaiRanking Consultancy since 2009, distinguishes itself by utilizing six objective indicators to rank world universities. This emphasis on quantifiable, empirical data sets it apart from other systems.
The key objective indicators used by ARWU are:
* Number of alumni winning Nobel Prizes and Fields Medals: This indicator assesses the quality of education and the long-term impact of the university’s graduates.
* Number of staff winning Nobel Prizes and Fields Medals: This metric reflects the caliber of the faculty and their significant contributions to their respective fields.
* Number of highly cited researchers selected by Clarivate: This indicator measures the influence and recognition of the university’s researchers within the global academic community.
* Number of articles published in journals of Nature and Science: This highlights research output and impact in two of the most prestigious scientific journals worldwide.
* Number of articles indexed in Science Citation Index-Expanded (SCIE) and Social Science Citation Index (SSCI): This evaluates the volume of research publications in widely recognized and influential academic databases.
* Per capita performance of a university: This indicator assesses the academic performance relative to the size of the institution, providing a measure of efficiency and productivity.
ARWU annually ranks more than 2500 universities, with the top 1000 being publicly released. This rigorous, research-focused methodology often results in a consistent top tier dominated by well-established research powerhouses.
D. Other Notable Approaches: CWTS Leiden Ranking
The CWTS Leiden Ranking offers an alternative approach to global university assessment, based exclusively on bibliometric indicators. Compiled annually by the Centre for Science and Technology Studies (CWTS) at Leiden University, this ranking system was established in 2003 as a direct response to ARWU, aiming to demonstrate a more appropriate use of bibliometric data for comparing universities.
The Leiden Ranking assesses universities worldwide based on the volume and citation impact of their academic publications, meticulously accounting for differences in language, discipline, and institutional size. Key indicators include:
* Mean Citation Score (MCS): The average number of citations received by a university’s publications.
* Mean Normalized Citation Score (MNCS): The average number of citations, normalized for field differences and publication year, allowing for fairer comparisons across diverse disciplines.
* Proportion of Top 10% Publications (PP(top 10%)): The percentage of a university’s publications that fall within the top 10% most frequently cited papers in their respective fields and publication years.
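As a toy sketch, two of these indicators can be computed from per-publication citation counts as below. This is illustrative only: CWTS's real MNCS and PP(top 10%) calculations normalize by field and publication year against the full Web of Science corpus, which is not reproduced here, and the threshold values are hypothetical.

```python
def mean_citation_score(citations):
    """MCS: average number of citations per publication."""
    return sum(citations) / len(citations)

def pp_top10(citations, thresholds):
    """PP(top 10%): share of publications whose citation count meets or
    exceeds the top-10% cutoff for their field and year.
    `thresholds` pairs each publication with its (field, year) cutoff."""
    hits = sum(1 for c, t in zip(citations, thresholds) if c >= t)
    return hits / len(citations)

# Five publications; thresholds are hypothetical field/year top-10% cutoffs.
cites = [3, 12, 0, 45, 7]
cutoffs = [10, 10, 8, 30, 9]
print(mean_citation_score(cites))  # 13.4
print(pp_top10(cites, cutoffs))    # 0.4
```

The per-field thresholds are what make PP(top 10%) a fairer cross-discipline measure than raw citation averages: a paper is compared only against cutoffs from its own field and year.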
Beyond citation impact, the Leiden Ranking also evaluates scientific collaboration, including co-authorship with other institutions, international collaborations, and partnerships with industry. The Leiden Ranking is particularly lauded for its rigor and robustness compared to other university rankings, scoring highly in “rigor” and “measure what matters” in independent evaluations. It provides multiple options for sorting rankings, deliberately avoids subjective reputational surveys, normalizes indicators where appropriate, and represents uncertainties with stability intervals, all contributing to its strong reputation for transparency.
A significant development in this regard is the launch of the Leiden Ranking’s “Open Edition” in 2024. This new edition draws its data from OpenAlex, an open-source, open-access database of research publications, aiming to provide greater transparency and eventually replace the original closed version. This initiative directly addresses a long-standing criticism of traditional, commercially-driven ranking systems like QS, THE, and ARWU, which are often criticized for using “closed, proprietary datasets” and for a general “lack of transparency” regarding their complex algorithms and scoring systems. The opacity of these commercial systems has frequently raised questions about their validity and reliability. The CWTS Leiden Ranking’s proactive move to utilize an open-source database and provide article-level data to demonstrate precisely how scores are calculated represents a significant and commendable response to these criticisms. This shift reflects a growing recognition within the broader academic and ranking communities of the imperative for greater accountability, reproducibility, and verifiability in university assessments. This trend towards open science metrics and transparent methodologies has the potential to fundamentally reshape the future of university rankings. It could exert considerable pressure on traditional, commercial rankers to adopt more transparent practices, or risk facing declining credibility and relevance. Ultimately, this movement fosters a more equitable and verifiable assessment landscape, empowering institutions and researchers to better understand, scrutinize, and even challenge ranking outcomes.
IV. University Rankings by Country/Region: The Canadian Case Study
Examining university rankings within a specific national context, such as Canada, provides valuable insights into how global methodologies are applied and complemented by national assessment frameworks.
A. Overview of Canadian University Rankings
Canada’s higher education system is evaluated by both the major global ranking bodies (QS, THE, ARWU) and prominent national publications, most notably Maclean’s. Maclean’s distinguishes its approach by categorizing Canadian universities into three distinct groups: Primarily Undergraduate, Comprehensive, and Medical Doctoral. This categorization is crucial as it acknowledges and accounts for the inherent differences in institutional missions, levels of research funding, diversity of program offerings, and the breadth and depth of graduate and professional programs across Canadian universities. This nuanced approach provides a more relevant and meaningful national comparison than a single, undifferentiated overall list.
Maclean’s draws its data from comprehensive and publicly available sources to ensure the robustness of its rankings. These sources include Statistics Canada, which provides numbers on faculty and student enrollment, total research income, and five key financial indicators for the fiscal year (operating budget, spending on student services, scholarships and bursaries, library expenses, and acquisitions). Data for social sciences and humanities research grants, as well as medical-science research grants, are obtained directly from the three major federal granting agencies: the Social Sciences and Humanities Research Council (SSHRC), the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Canadian Institutes of Health Research (CIHR). Additionally, Maclean’s gathers information on numerous student and faculty awards from 50 different organizations and conducts a reputational survey. This survey canvasses the opinions of university faculty, senior administrators, and various business professionals across the country, asking them to rate Canada’s universities in areas such as Highest Quality, Most Innovative, and Leaders of Tomorrow.
AI-researched.
So, a warning: be prepared for some nudity. This starts this evening and continues through the weekend…
The swamp starts somewhere…
It may not be a deranged killer, a heart attack, a blown tire, or a mechanical failure. It could be a distracted driver who has not yet gotten his ticket.
Whatever the reason, you are better off walking on the side of the street that faces the oncoming lane of traffic to avoid being hit.
Remember, your injury is not yours alone; you are connected, and your loss is our community's loss.



When people say there are only two sexes, they are typically referring to a binary biological classification based on the primary function in sexual reproduction. In this model:
* Males are defined by their role in producing small, mobile gametes called sperm. They typically have XY chromosomes and reproductive systems adapted for producing and delivering sperm.
* Females are defined by their role in producing large, relatively immobile gametes called eggs (ova). They typically have XX chromosomes and reproductive systems adapted for producing eggs, fertilization, and often gestation.
This binary understanding of sex is rooted in the observation of these distinct reproductive roles across many species, including humans. It focuses on biological attributes such as:
* Chromosomes: The presence of XX chromosomes is typically associated with female development, while XY is typically associated with male development.
* Gonads: Ovaries in females produce eggs, and testes in males produce sperm.
* Hormones: Estrogen and progesterone are often associated with female development, while testosterone is often associated with male development.
* Anatomy: Differences in internal and external reproductive organs.
It’s important to note that while this binary model is common and reflects the majority of the population, it doesn’t account for the existence of intersex individuals. Intersex is an umbrella term for people born with sex characteristics (chromosomes, hormones, gonads, or anatomy) that don’t fit typical binary notions of male or female. These variations are naturally occurring.
Furthermore, it’s crucial to distinguish between sex (a biological construct) and gender (a social and personal construct related to one’s internal sense of self and how they express that identity). While sex is often assigned at birth based on visible biological characteristics, gender identity can be more complex and exist beyond a binary of male and female.
Sex
For purposes of financing, the disqualifying criteria can only be biological, i.e., the birth-certificate status of physical sex organs.
Queer
The term queer can have nothing to do with either of these criteria but instead exist in the mind of the beholder, as a power to dominate, exclude, or denigrate a particular person, class of persons, or place.
Adam Johnson is dead, and four decades of neck guards in minor hockey have not persuaded the big leagues into broad usage.

Neck guards in minor hockey have a history rooted in a tragic incident. In 1975, Ontario Junior A league goalie Kim Crouch suffered a severe neck laceration from a skate blade. Following this near-fatal injury, his father developed the first neck guard to allow Kim to return to playing. This event spurred the initial adoption of neck guards, particularly in Canadian minor hockey leagues.
Over the following decades, neck guards became increasingly common in minor hockey across Canada. Many provincial and local hockey organizations mandated their use for players under a certain age. The Bureau de normalisation du Québec (BNQ) certification became a standard, ensuring the quality and protective capability of neck guards used in Canada.
In the United States, the adoption of mandatory neck guards in minor hockey has been less uniform historically, often varying by state or local association. However, following the death of Adam Johnson, who suffered a fatal neck cut in a professional game in England in October 2023, there has been a significant push for mandatory neck protection at all levels. As a result, USA Hockey mandated neck laceration protection for all players under the age of 18, effective August 1, 2024.
Use in Professional Hockey:
Currently, neck guards are not mandatory in the National Hockey League (NHL). While some players choose to wear them, the majority do not. Concerns about comfort, potential hindrance to performance, and tradition are often cited as reasons for this resistance.
However, the conversation around neck guards in professional hockey has intensified following recent serious neck injuries in hockey globally. Some NHL players have begun to wear neck guards voluntarily, and there is ongoing discussion about whether the league and the NHL Players’ Association might eventually agree to make them mandatory, similar to the evolution of helmet and visor rules.
Other professional leagues, such as the American Hockey League (AHL) and leagues in Europe (Germany, Sweden, Finland, and England), have implemented mandatory neck guard policies. The International Ice Hockey Federation (IIHF) also mandates neck guards for all its tournaments, including the World Junior Championships and the World Championships.
Effectiveness and Usefulness:
Neck guards are designed to protect the vulnerable neck and throat area from lacerations caused by skate blades, hockey sticks, and pucks. They are typically made of cut-resistant materials like Kevlar or Spectra fibers, often encased in a comfortable fabric.
Studies and real-life incidents suggest that neck guards can be effective in preventing or reducing the severity of neck lacerations. While no protective equipment can guarantee complete safety, neck guards provide a crucial barrier against sharp objects that could otherwise cause life-threatening injuries to major blood vessels and nerves in the neck.
Have they ever saved anyone from catastrophic injury?
Yes, there are documented cases where neck guards are believed to have saved players from severe or fatal injuries:
* The case of Kim Crouch: His injury in 1975 directly led to the development and subsequent use of neck guards, preventing similar incidents for countless players.
* Recent youth hockey incident in Eastern Ontario: Following Adam Johnson’s death, a young player was reportedly saved by a neck guard after being cut by a skate.
While less publicized, it is plausible that neck guards have prevented numerous other less severe but still significant neck injuries over the years.
In conclusion, the history of neck guards in hockey began with a serious injury that prompted their invention. They have become a standard and often mandatory piece of equipment in minor hockey, particularly in Canada, and are now gaining increased attention and mandatory status in youth hockey in the United States. While not currently mandatory in the NHL, the effectiveness and potential life-saving benefits of neck guards are becoming increasingly recognized, leading to more players choosing to wear them and ongoing discussions about a potential league-wide mandate in the future.
Sportsmen need sportsmanship, always.