The Year of the Graph

IT Services and IT Consulting

Keeping track of all things Graph Year over Year. Central node for all things Graph: Analytics, AI, DB, Knowledge Graph

About us

Your central node for all things Graph: Graph Analytics, Graph AI, Knowledge Graphs, Graph DBs. Newsletter, Report, Resources.

Graph analytics, Graph AI, Knowledge Graphs and Graph Databases have been making waves and have recently featured in hype cycles. The Year of the Graph marked the beginning of it all, before the Gartners of the world got in the game. The Year of the Graph is a term coined to convey that the time has come for this technology to flourish. The eponymous article that set the tone was published in January 2018 on ZDNet by domain expert George Anadiotis.

The need for knowledge on these technologies is constantly growing. To respond to that need, the Year of the Graph newsletter was released in April 2018. In addition, a constant stream of graph-related news and resources is shared on social media. To help people make educated choices, the Year of the Graph Database Report was released. The report has been hailed as the most comprehensive of its kind in the market, consistently helping people choose the most appropriate solution for their use case since 2018.

The articles, news stream, newsletter, report and resources have been reaching thousands of people, helping them understand and navigate this landscape. The Year of the Graph is their home: your central node for all things Graph.

Website
https://yearofthegraph.xyz
Industry
IT Services and IT Consulting
Company size
1 employee
Headquarters
Athens
Founded
2018
Specialties
Analytics, Knowledge Graphs, AI, Graph Databases, Market Analysis, Technology, Futurism, Data Models, Data Science, Analysis, Reports, Newsletter, Resources, and Consulting

Updates

    3,130 followers

    Knowledge Graph Enlightenment, AI and RAG. The Year of the Graph Newsletter Vol. 26, Summer 2024

    A snapshot of the adoption wave for graphs in the real world, and the evolution of their use to support and advance AI - generative or otherwise. Is Knowledge Graph enlightenment here, and what does that mean for AI and RAG?

    In the previous edition of the YotG newsletter, the wave of Generative AI hype was probably at its all-time high. Today, while Generative AI is still talked about and trialed, the hype is subsiding. Skepticism is settling in, and for good reason: reports from the field show that only a handful of deployments are successful. In its current state, Generative AI can be useful in certain scenarios, but it's far from being the be-all and end-all that was promised or imagined. The cost and expertise required to evaluate, develop and deploy Generative AI-powered applications remain substantial. Promises of breakthroughs remain mostly promises, and adoption even by the likes of Google and Apple seems haphazard, with half-baked announcements and demos.

    At the same time, shortcomings are becoming more evident and better understood. This is the typical hype cycle evolution, with Generative AI about to take a plunge into the trough of disillusionment. Ironically, it is these shortcomings that have been fueling renewed interest in graphs - more specifically, Knowledge Graphs, as part of RAG (Retrieval Augmented Generation). Knowledge Graphs are able to deliver benefits deterministically. Having preceded Generative AI by many years, Knowledge Graphs are entering a more productive phase in terms of their perception and use. Coupled with proper tools and oversight, Generative AI can boost the creation and maintenance of Knowledge Graphs.
#KnowledgeGraph #AI #LLM #RAG #GenAI #GraphDB #DataScience #Research #EmergingTech Svetlana Sicular Ben Lorica 罗瑞卡 Prashanth Rao Jonathan Larson George Karypis Costas Mavromatis Terence Lucas Yap Neo4j LangChain Chia Jeng Yang Dan Selman He Xiaoxin Writer May Habib Juan Sequeda Dean Allemang Jay (JieBing) Yu, PhD Zhentao Xu Mark Jerome Cruz Xin Luna Dong Tomaz Bratanic Oskar Hane Michael Galkin Michael Bronstein Azmine Toushik Wasi Fedor Borisyuk Bryan Perozzi Adam Ronthal Robin Schumacher, Ph.D. Aerospike FalkorDB Kurt Cagle Steve Hedden Changlong Y. Steven Xu Chaitanya (Sree) Vadrevu Valentin Buchner Rahul N. Qiang Sun Nicolas Hubert



    GraCoRe: Benchmarking Graph Comprehension and Complex Reasoning in Large Language Models

    Evaluating the graph comprehension and reasoning abilities of Large Language Models (LLMs) is challenging and often incomplete. Existing benchmarks focus primarily on pure graph understanding, lacking a comprehensive evaluation across all graph types and detailed capability definitions. GraCoRe is an open source benchmark for systematically assessing LLMs' graph comprehension and reasoning. GraCoRe uses a three-tier hierarchical taxonomy to categorize and test models on pure and heterogeneous graphs, subdividing capabilities into 10 distinct areas tested through 19 tasks. The benchmark includes 11 datasets with 5,140 graphs of varying complexity. Three closed-source and seven open-source LLMs were evaluated, with thorough analyses conducted from both ability and task perspectives.

    Key findings: 1. Semantic enrichment, i.e., adding meaning to the graph structure, enhances reasoning performance. 2. The order in which nodes are presented impacts task success. 3. The ability to process longer texts doesn't necessarily translate to better graph comprehension or reasoning.

    #DataScience #AI #EmergingTech #LLM https://lnkd.in/dkHN4Frw


    Foundations and Frontiers of Graph Learning Theory

    Recent advancements in graph learning have revolutionized the way we understand and analyze data with complex structures. Notably, Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm. Because these models are usually characterized by intuition-driven design or highly intricate components, placing them within a theoretical analysis framework to distill the core concepts helps us better understand the key principles that drive their functionality and guides further development.

    Given this surge in interest, this article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models. Graph embedding, graph kernels, and GNNs are fundamental approaches for representing and analyzing graph-structured data. While graph embedding and graph kernels have been effective, they face challenges in capturing complex graph interactions and may require pre-processing steps. GNNs address these limitations by combining the power of neural networks with the expressive capacity of graphs, enabling end-to-end learning of node and graph representations. More recently, Graph Transformers have emerged as an advanced technique in graph learning, applying self-attention mechanisms to capture long-range dependencies between nodes while incorporating graph structural information. These advancements have opened up new possibilities for understanding graph-structured data in various domains.

    Encompassing discussions on fundamental aspects such as expressive power, generalization and optimization, and unique phenomena such as over-smoothing and over-squashing, this piece delves into the theoretical foundations and frontiers driving the evolution of graph learning. In addition, the article presents several challenges and initiates discussions on possible solutions. Link in comments. #DataScience #AI #NeuralNetwork #EmergingTech #GNN


    Better LLM Integration With Content-Centric Knowledge Graphs Extracting knowledge graphs using a large language model (LLM) is time-consuming and error-prone. These difficulties arise because the LLM is being asked to extract fine-grained, entity-specific information from the content. Inspired by the benefits of vector search, especially the ability to get good results from ingesting content with relatively little cleaning, Ben Chambers explores a coarse-grained knowledge graph — the content knowledge graph — focused on the relationships between content. Link in comments. #KnowledgeGraph #LLM #AI #RAG #GraphRAG #GraphDB #DataScience #EmergingTech


    How exactly do knowledge graphs work? Before you dive deep into GraphRAG, learn the basics of property graphs: each node and relation can store a structured dictionary of properties. LlamaIndex shares a video explaining property graphs, different ways of constructing them with LLMs, and different ways of querying them, complete with diagrams. Link in comments. #KnowledgeGraph #AI #LLM #RAG #DataScience #EmergingTech
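The property graph model described above can be sketched in a few lines of Python. This is an illustrative toy, not LlamaIndex's implementation; the class and attribute names are invented for the example:

```python
# Minimal property-graph sketch: both nodes and relations carry a
# structured dictionary of properties. Real graph stores (e.g. Neo4j)
# add indexing, persistence and a query language on top of this model.

class Node:
    def __init__(self, node_id, label, **properties):
        self.id = node_id
        self.label = label              # e.g. "Person", "Company"
        self.properties = dict(properties)

class Relation:
    def __init__(self, source, rel_type, target, **properties):
        self.source = source            # a Node
        self.type = rel_type            # e.g. "WORKS_AT"
        self.target = target            # a Node
        self.properties = dict(properties)

# Build a tiny graph: (Alice)-[:WORKS_AT {since: 2021}]->(Acme)
alice = Node("n1", "Person", name="Alice", role="engineer")
acme = Node("n2", "Company", name="Acme")
edge = Relation(alice, "WORKS_AT", acme, since=2021)

# Nodes and the relation each hold their own structured properties
assert alice.properties["role"] == "engineer"
assert edge.properties["since"] == 2021
```

The key point the video makes is visible even at this scale: properties live on relations as well as nodes, which is what distinguishes property graphs from plain edge lists or RDF-style triples.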


    MM-GRAPH: A Multimodal Graph Benchmark

    Associating unstructured data with structured information is crucial for real-world tasks that require relevance search. However, existing graph learning benchmarks often overlook the rich semantic information associated with each node. The Multimodal Graph Benchmark (MM-GRAPH) is the first comprehensive multi-modal graph benchmark that incorporates both textual and visual information to bridge this gap. MM-GRAPH consists of five graph learning datasets of various scales that are appropriate for different learning tasks. Their multimodal node features enable a more comprehensive evaluation of graph learning algorithms in real-world scenarios. To facilitate research on multimodal graph learning, an extensive study is provided on the performance of various graph neural networks in the presence of features from various modalities.

    MM-GRAPH aims to foster research on multimodal graph learning and drive the development of more advanced and robust graph learning algorithms. By providing a diverse set of datasets and benchmarks, MM-GRAPH enables researchers to evaluate and compare their models in realistic settings, ultimately leading to improved performance on real-world applications that rely on multimodal graph data. Link in comments. #AI #EmergingTech #NeuralNetwork #GNN #DataScience #Research


    The Year of the Graph is your central node for all things Graph: Knowledge Graphs, Graph Analytics / AI / Data Science / Databases. Be always in the know, from industry use cases to the latest research trends. Join the ranks of subscribers from the likes of Amazon, AstraZeneca and Yandex. Browse through the back catalog and subscribe to the YotG Newsletter. #KnowledgeGraph #AI #DataScience #Analytics #GraphDB #GenAI #LLM #RAG #EmergingTech https://lnkd.in/djt7ZdQ


    HybridAGI: A Programmable Graph-based Open Source Framework for Neuro-Symbolic AI

    HybridAGI is an AgentOS framework designed for creating explainable and deterministic agent systems suitable for real-world applications. It is a programmable LM-based agent that enables defining behavior using a graph-based prompt programming approach. The metaphor its creators use is that if DSPy is the PyTorch of LM applications, HybridAGI is the equivalent of Keras for LM agent systems. It is a memory-centric system which centralizes knowledge, documents, programs and traces into a hybrid vector/graph database. HybridAGI is designed for data scientists, prompt engineers, researchers, and AI enthusiasts who love to experiment with AI. It is a "Build Yourself" product that focuses on human creativity rather than AI autonomy. It's open source; GitHub link available in the comments. Has anyone used it? Let us know what you think. #KnowledgeGraph #AI #LLM #DataScience #Python #OpenSource #GenAI #EmergingTech


    Are graph databases obsolete? Can they be replaced by SQL:2023 systems?

    Two decades ago, Michael Stonebraker co-authored a paper with Joe Hellerstein commenting on the previous 40 years of data modelling research and development. That paper demonstrated that the relational model (RM) and SQL are the prevailing choice for database management systems (DBMSs), despite efforts to replace them. Instead, SQL absorbed the best ideas from these alternative approaches. Stonebraker and Andy Pavlo revisit this work, arguing that the trend has continued since 2005: the RM remains the dominant data model, and SQL has been extended to capture good ideas from others. The authors expect more of the same in the future. They also discuss DBMS implementations and argue that the major advancements have been in RM systems, primarily driven by hardware.

    On graph databases, they note that there has been a lot of academic and industry interest in the last decade, and examine OLTP and analytics use cases and solutions. On analytics, they note that algorithm choice and data representation will determine a DBMS's performance. This argues for a computing fabric that allows developers to write their own algorithms using an abstraction that hides the underlying system topology. However, previous research shows that distributed algorithms rarely outperform single-node implementations because of communication costs. A better strategy is to compress a graph into a space-efficient data structure that fits in memory on a single node and then run the query against this data structure. All but the largest graph databases are probably best handled this way. They also note that regardless of whether a graph DBMS targets OLTP or OLAP workloads, the key challenge these systems have to overcome is that it is possible to simulate a graph as a collection of tables, which means that RDBMSs are always an option to support graphs.

    In terms of query formulation and performance in this scenario, the authors claim that: a. SQL:2023 introduced property graph queries (SQL/PGQ) for defining and traversing graphs in an RDBMS. The syntax builds on existing languages and shares aspects of the emerging GQL standard. Thus, SQL/PGQ further narrows the functionality difference between RDBMSs and native graph DBMSs. b. Several performance studies have shown that graph simulation on RDBMSs outperforms graph DBMSs. More recent work showed how SQL/PGQ in DuckDB outperforms a leading graph DBMS by up to 10X.

    Are graph databases obsolete then? Can they be replaced by SQL:2023 systems? What about ease of use, visualization, exploration, and algorithm support? What about other use cases, for example knowledge management and RAG? What is your take? Are you familiar with the research Stonebraker and Pavlo present? Do you have experience using SQL:2023? Link in comments. #GraphDB #Research #DataModeling #Database #SoftwareArchitecture #DataScience #Analytics
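The "graph as a collection of tables" argument is easy to demonstrate. The sketch below uses SQLite via Python's standard library; SQLite does not support SQL/PGQ, so a plain recursive CTE (standard SQL well before SQL:2023) stands in for the traversal, and the table and column names are invented for the example:

```python
# Hedged sketch: encode a graph as two relational tables (nodes, edges)
# and traverse it with a recursive common table expression.
# SQL/PGQ would express the same reachability query with MATCH syntax;
# here a recursive CTE shows the underlying table simulation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE edges (src INTEGER, dst INTEGER);
    INSERT INTO nodes VALUES (1,'a'),(2,'b'),(3,'c'),(4,'d');
    INSERT INTO edges VALUES (1,2),(2,3),(3,4);
""")

# All nodes reachable from node 1: a graph traversal expressed
# entirely against the relational encoding.
rows = conn.execute("""
    WITH RECURSIVE reach(id) AS (
        SELECT 1
        UNION
        SELECT e.dst FROM edges e JOIN reach r ON e.src = r.id
    )
    SELECT n.name FROM nodes n JOIN reach USING (id) ORDER BY n.id
""").fetchall()

print([name for (name,) in rows])  # ['a', 'b', 'c', 'd']
```

This is the baseline the performance studies compare against; SQL/PGQ's contribution is nicer syntax and, in systems like DuckDB, graph-aware optimization of exactly this kind of query.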


    The 7 Pain Points of GraphRAG

    GraphRAG comes with a lot of challenges. A summary by Jérémy Ravenel:
    1) Data Quality: Inconsistent data, outdated information, and biased datasets.
    2) Retrieval Process: Irrelevant information retrieval, missing context, and information overload.
    3) Graph Construction: Missing relationships, over-complex graph structure, and incorrect entity linking.
    4) LLM Integration: Interfacing structured knowledge with LLMs doesn't prevent hallucinations, misinterpretation, and inconsistent reasoning.
    5) Knowledge Gaps: How do you handle novel entities?
    6) Scalability & Performance: Slow processing and high computational needs.
    7) Ethical & Privacy Risks: Information exposure and bias amplification.
    Link in comments. #AI #LLM #GenAI #EmergingTech #DataScience #RAG

