HEMANTH LINGAMGUNTA
"Pioneering Creativity and Innovation" I am a polymath—a lifelong learner with a deep and diverse understanding across multiple fields.
Unlocking the Temporal Dimension in AI: Aristotle's Paradox Meets Modern Machine Learning

Imagine AI models that don't just process time, but truly understand its enigmatic nature. By infusing Aristotle's temporal paradox into the DNA of LLMs, VLMs, and APIs, we're opening a portal to a new realm of artificial intelligence.

Picture models that grasp the ephemeral present, the vanished past, and the potential future - not as mere data points, but as fluid concepts dancing on the edge of existence. This isn't just about improving time-stamped queries; it's about imbuing our digital creations with a philosophical understanding of time itself.

From language models that craft narratives with a nuanced grasp of temporal flow, to vision systems that interpret the unfolding of events with unprecedented depth, we're not just advancing technology - we're teaching machines to ponder the very fabric of reality.

This fusion of ancient wisdom and cutting-edge AI promises to revolutionize everything from predictive analytics to creative storytelling. Are we ready for AI that doesn't just process time, but contemplates its very nature?

#AIPhilosophy #TemporalIntelligence #AristotleMeetsAI

Citations:
[1] Large Language Models Can Learn Temporal Reasoning https://lnkd.in/gDbmWE7r
[2] Temporal Reasoning in LLM - Chenhan Yuan https://lnkd.in/gGwiKdBn
[3] Temporal quality degradation in AI models - Scientific Reports https://lnkd.in/gGiAVn3a
[4] Physics (Aristotle) - Wikipedia https://lnkd.in/ghVbiYnz
[5] REST API Examples: TimeSeries Queries - Product Documentation https://lnkd.in/g6REfmxK
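On the practical side of "time-stamped queries" (citation [5]), here is a minimal sketch of what a time-series REST query can look like. The endpoint, parameter names, and response shape below are invented for illustration; they are not taken from the cited product documentation.

```python
import requests

# Hypothetical time-series REST query. Endpoint, parameters, and response
# shape are invented for illustration, NOT from the cited documentation.
resp = requests.get(
    "https://api.example.com/v1/timeseries",
    params={
        "metric": "sensor.temperature",
        "start": "2024-07-01T00:00:00Z",  # explicit lower bound: the "past"
        "end": "2024-07-02T00:00:00Z",    # explicit upper bound: the "present"
        "step": "5m",                     # sampling resolution over the interval
    },
    timeout=10,
)
resp.raise_for_status()
points = resp.json()["values"]  # assumed shape: [[iso_timestamp, value], ...]
print(f"fetched {len(points)} points")
```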
Transcript
Aristotle, when he treats time in the Physics, starts with a riddle that he never answers, which goes something like this. Think about time as divided into the past, the present, and the future. And then think for a while about what the present is. How thick is the present? The present is just a limit between the past and the future. And then you get the paradox. Because the past is something that does not exist. It has existed, but it does not exist any longer. The future is something that does not exist. It will exist, but it doesn't exist yet. And the present is nothing. So time seems to be nothing, dividing something nonexistent from something nonexistent.
HEMANTH LINGAMGUNTA
"Pioneering Creativity and Innovation" I am a polymath—a lifelong learner with a deep and diverse understanding across multiple fields.
2w
- Report this comment
Aristotle's paradox of time arises from his reflections in the "Physics." He grapples with the nature of time, noting that the past no longer exists, the future is not yet real, and the present is an indivisible point that constantly moves. This raises the paradoxical question of how time can be both continuous and made up of these indivisible instants, leading to a deeper inquiry into the nature of change and existence.
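One way to make the riddle precise in modern terms (a formalization the sources above do not themselves give): model the timeline as the real line. The present is then a boundary point of measure zero, yet the three parts still exhaust all of time.

```latex
% The present as a measure-zero boundary between past and future.
\[
  T_{\text{past}} = (-\infty, t_0), \qquad
  T_{\text{present}} = \{t_0\}, \qquad
  T_{\text{future}} = (t_0, \infty),
\]
\[
  \mu\bigl(T_{\text{present}}\bigr) = 0
  \quad\text{and yet}\quad
  T_{\text{past}} \cup T_{\text{present}} \cup T_{\text{future}} = \mathbb{R}.
\]
```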
More Relevant Posts
Brinda Gurusamy
Software Engineering, ML at Cisco | UC Berkeley
🗞 The Unreasonable Ineffectiveness of the Deeper Layers

This paper explores layer-pruning strategies for open-weight pretrained large language models. To prune these models, the authors use similarity across layers to identify the optimal block of layers to remove, followed by a healing step that involves a small amount of fine-tuning (see the sketch below). Even after removing a significant portion of the model's layers, the models show minimal performance degradation on question-answering tasks. The study shows potential for reducing computing resources during fine-tuning and improving memory and latency constraints in inference. The robustness of LLMs to layer deletion may indicate inefficiencies in leveraging deeper layers, or the critical role of shallow layers in knowledge retention.

Three key takeaways:
⚫ Layer pruning significantly reduces model memory footprint and inference time in proportion to the number of removed layers, while maintaining robust performance.
⚫ Layer pruning methods can complement other PEFT and quantization strategies to further reduce computational resources.
⚫ The model's resilience to deep-layer removal and its impact on downstream tasks emphasize the importance of shallow layers in retaining knowledge.

Interesting questions that the authors consider worthy of investigation:
⏺ What are optimal layer-pruning strategies and effective healing approaches?
⏺ How is knowledge distributed across layers, and how can LLMs utilize parameters in their deepest layers more effectively?

🔗 https://lnkd.in/gGSWhx85

#llm #machinelearning #artificialintelligence #ai #ml
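A minimal sketch of the block-selection idea, assuming per-layer hidden states have already been collected from a forward pass; random arrays stand in for real activations here, and the healing fine-tune step is omitted.

```python
import numpy as np

def angular_distance(a, b):
    # Mean angular distance between paired hidden-state vectors, in [0, 1].
    cos = np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
    return np.mean(np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi)

def best_block_to_prune(hidden_states, n):
    """hidden_states: list of (tokens, dim) arrays, one per layer boundary.
    Returns the start index of the n-layer block whose input and output are
    most similar, i.e. the block whose removal changes representations least."""
    num_layers = len(hidden_states) - 1
    scores = [angular_distance(hidden_states[ell], hidden_states[ell + n])
              for ell in range(num_layers - n + 1)]
    return int(np.argmin(scores)), scores

# Toy demo: random activations standing in for a real model's hidden states.
rng = np.random.default_rng(0)
states = [rng.normal(size=(128, 512)) for _ in range(33)]  # 32 "layers"
ell, scores = best_block_to_prune(states, n=8)
print(f"prune layers {ell}..{ell + 7} (distance {scores[ell]:.3f})")
```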
Andy ThurAI
VP, Constellation Research | HBR/Forbes/VentureBeat author | Startup Advisor | Keynote Speaker | Analyst | Influencer | Thought Leader | Story teller AI | ML | AIOps | MLOps | O11y | CloudOps |
🔥𝗘𝘅𝘁𝗿𝗲𝗺𝗲 𝗟𝗟𝗠 𝗖𝗼𝗺𝗽𝗿𝗲𝘀𝘀𝗶𝗼𝗻: 𝗛𝘆𝗽𝗲 𝗼𝗿 𝗥𝗲𝗮𝗹𝗶𝘁𝘆?🔍

I've been digging into the world of "extreme" LLM compression – techniques that squeeze massive language models onto resource-constrained devices. 🤯

The current methods (quantization, distillation, etc.) are showing promise, but they still have limitations. 😒 That's why I'm intrigued by two emerging techniques: Additive Quantization for Language Models (AQLM) and PV-Tuning. 🤔

AQLM claims to achieve Pareto optimality in accuracy vs. model size, even at extremely low bitrates. PV-Tuning, on the other hand, optimizes the fine-tuning process for extreme compression, pushing the boundaries of what's possible. 📈

But here's the burning question: are these techniques just academic research, or are they ready for real-world deployment? 🤔

I'm eager to hear from fellow AI practitioners and researchers. Have you experimented with AQLM or PV-Tuning? What were your results? Share your insights and experiences below! 👇

#LLM #AI #MachineLearning #ModelCompression #Quantization #AQLM #PVTuning #DeepLearning

Link to the research paper in the first comment below 👇
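To make "additive quantization" concrete, here is a toy sketch of the core idea behind AQLM: approximating each group of weights by a sum of one codeword from each of several codebooks. This illustrates the principle only; it is not the AQLM algorithm, and real AQLM learns its codebooks rather than using random ones.

```python
import numpy as np

def additive_quantize(W, codebooks):
    """Toy additive quantization: each row of W is approximated by a SUM of
    one codeword from each codebook, chosen greedily against the residual."""
    residual = W.copy()
    recon = np.zeros_like(W)
    for book in codebooks:                               # each book: (K, d)
        d2 = ((residual[:, None, :] - book[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)                          # nearest codeword per row
        recon += book[idx]
        residual -= book[idx]
    return recon

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 8))                            # "weights" in groups of 8
codebooks = [rng.normal(size=(16, 8)) for _ in range(2)] # 2 books of 16 codewords
W_hat = additive_quantize(W, codebooks)
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
# Storage: 2 indices x 4 bits per group of 8 weights = 1 bit/weight (+ codebooks).
```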
A Shreya Sri
Analyst Trainee @ Deloitte USI|Computer Science Graduate @KMIT | Technology Enthusiast
🔍𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐢𝐧𝐠 𝐊𝐨𝐥𝐦𝐨𝐠𝐨𝐫𝐨𝐯–𝐀𝐫𝐧𝐨𝐥𝐝 𝐍𝐞𝐭𝐰𝐨𝐫𝐤𝐬 (𝐊𝐀𝐍): 𝐀 𝐉𝐨𝐮𝐫𝐧𝐞𝐲 𝐓𝐡𝐫𝐨𝐮𝐠𝐡 𝐂𝐨𝐝𝐞 𝐚𝐧𝐝 𝐌𝐚𝐭𝐡𝐞𝐦𝐚𝐭𝐢𝐜𝐚𝐥 𝐃𝐞𝐫𝐢𝐯𝐚𝐭𝐢𝐨𝐧𝐬🔍

In the ever-evolving field of machine learning, simplifying complex multivariable functions is a crucial challenge. Kolmogorov–Arnold Networks (KANs), inspired by a profound mathematical theorem, offer a unique approach to function approximation.

In my latest article, I explore:
- 𝐓𝐡𝐞 𝐊𝐨𝐥𝐦𝐨𝐠𝐨𝐫𝐨𝐯–𝐀𝐫𝐧𝐨𝐥𝐝 𝐓𝐡𝐞𝐨𝐫𝐞𝐦: A groundbreaking discovery that allows us to decompose complex multivariable functions into simpler, univariate components.
- 𝐒𝐭𝐞𝐩-𝐛𝐲-𝐒𝐭𝐞𝐩 𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧: Detailed code snippets and explanations to guide you through building and training your own KAN (a toy flavor of this appears below).
- 𝐑𝐞𝐚𝐥-𝐖𝐨𝐫𝐥𝐝 𝐀𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬: How KANs can be utilized for efficient function approximation.
- 𝐂𝐨𝐦𝐩𝐚𝐫𝐚𝐭𝐢𝐯𝐞 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬: The benefits and limitations of KANs compared to large language models (LLMs).

Curious about how this innovative approach can transform your work? 𝘾𝙡𝙞𝙘𝙠 𝙩𝙝𝙚 𝙡𝙞𝙣𝙠 𝙞𝙣 𝙩𝙝𝙚 𝙛𝙞𝙧𝙨𝙩 𝙘𝙤𝙢𝙢𝙚𝙣𝙩 𝙩𝙤 𝙧𝙚𝙖𝙙 𝙩𝙝𝙚 𝙛𝙪𝙡𝙡 𝙖𝙧𝙩𝙞𝙘𝙡𝙚.

Join the discussion on how KANs could shape the future of AI and scientific discovery.

#AI #MachineLearning #Innovation #DataScience #Science
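As a taste of the idea, here is a toy KAN-style layer: a learnable univariate function on every input-output edge, with the layer's outputs formed by summing edge functions, i.e. the Kolmogorov–Arnold form. Real KANs use learnable B-spline grids plus a base activation; this sketch substitutes a fixed Gaussian basis with learnable mixing coefficients and omits training.

```python
import numpy as np

class ToyKANLayer:
    """One KAN-style layer: in_dim x out_dim learnable *univariate* edge
    functions, each a linear mix of fixed Gaussian (RBF) basis functions."""
    def __init__(self, in_dim, out_dim, num_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = np.linspace(-2, 2, num_basis)           # basis centers
        self.coef = rng.normal(size=(in_dim, out_dim, num_basis)) * 0.1

    def __call__(self, x):                                     # x: (batch, in_dim)
        # Basis values for each input coordinate: (batch, in_dim, num_basis).
        phi = np.exp(-(x[..., None] - self.centers) ** 2)
        # Output o = sum over inputs i of the edge function f_{i,o}(x_i).
        return np.einsum('bip,iop->bo', phi, self.coef)

layer = ToyKANLayer(in_dim=2, out_dim=1)
x = np.random.default_rng(1).uniform(-1, 1, size=(4, 2))
print(layer(x).shape)  # (4, 1)
```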
Piotr Boguszewski
AI WARS: Toolboxes vs Swiss Army Knives 🛠️🔪

Intro: There's so much happening in AI nowadays that it's easy to feel overwhelmed. Let me break down some major dimensions of this landscape one by one.

When discussing Large Language Models (LLMs), the term "large" relates to the number of parameters, ranging from 7 billion to over 500 billion. The truth is that there is no definitive measure of what "large" means, and in my opinion, this is a crucial aspect to understand! What really matters are the classes of models:

- 🛠️ Toolboxes - These gigantic models, like GPT-4, LLaMa 3 70B, Claude 3 Opus, and Gemini Ultra, boast over 50 billion parameters. Designed to be generalists, they excel across multiple tasks but aren't optimized for specific ones. The trade-off? Their massive size makes them challenging and costly to deploy locally, but they're definitely the most powerful!

- 🗡️ Swiss Army Knives - These smaller models, like LLaMa 3 8B, Mistral 7B, and Gemini Nano, may not match their larger counterparts in accuracy but shine in flexibility. They're cheaper, faster, easier to deploy, and ideal for fine-tuning and specific applications, even on edge devices. Less powerful, but still good enough in many cases!

Which is better? There's no one answer to this question - it depends on your needs. Your project might require the robust capabilities of a Toolbox or the agility of a Swiss Army Knife.

Stay tuned for more insights in this series! 💪 May the force of AI be with you!

#AI #AILandscape #GenerativeAI #BigData #TechTrends #AIInnovation #MachineLearning #AIWars #DataScience
Bayes Labs
🚀 Research Paper Highlights: Introducing Aggregation of Reasoning (AoR) for Enhanced Answer Selection in LLMs, by Zhangyue Yin et al. Research done at Fudan University, National University of Singapore, Shanghai AI Laboratory, and Midea AI Research Center.

1️⃣ Research Objective: Aggregation of Reasoning (AoR) is a hierarchical framework designed to improve answer selection in large language models (LLMs) for complex reasoning tasks. Unlike traditional ensemble methods that rely on majority voting, AoR emphasizes the quality of reasoning chains over the frequency of answers.

2️⃣ Methodology: AoR uses a two-phase evaluation process (a toy sketch follows below):
- Local-scoring: Assesses reasoning chains with the same answer based on logical consistency, appropriateness of method, completeness, clarity, and knowledge application.
- Global-evaluation: Compares the best chains from each answer group, focusing on the validity of approach, consistency, completeness, clarity, and knowledge application.
Dynamic sampling adjusts the number of reasoning chains based on task complexity.

3️⃣ Experimental Setup: Experiments utilize GPT-3.5-Turbo-0301 as the backbone LLM, with additional models like GPT-4-0314, Claude-2, LLaMA-2-70B-Chat, and Mixtral-8x7B. The framework is tested across diverse reasoning tasks, including mathematical, commonsense, and symbolic reasoning, using 14 datasets. Detailed implementation and hyperparameter analysis are provided in the appendices.

4️⃣ Results and Evaluation: AoR outperforms existing ensemble methods in accuracy and computational efficiency. Performance is measured using accuracy and computational costs based on OpenAI's GPT-3.5-Turbo-0301 API pricing. Experiments were conducted between July and December 2023, with sample sizes limited by rate limits and budget constraints.

5️⃣ Ethical Considerations and Limitations: Ethical considerations ensure that AoR does not use personally identifiable information and that experimental prompts are non-discriminatory. Limitations include reliance on manual demonstrations for evaluations and context-window size constraints, with optimism that future LLM advancements will address these challenges.

Further reading: https://lnkd.in/dngn5zF3

🌟 Stay tuned for more updates on upcoming research and analysis in this rapidly evolving landscape of Generative AI.

#GenerativeAI #MachineLearning #LLMs #Research #Innovation
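A toy sketch of the two-phase selection described in the methodology. The `score_local` and `score_global` callables stand in for LLM-based judges and are assumptions of this illustration, not the paper's implementation; dynamic sampling is omitted.

```python
from collections import defaultdict

def aggregation_of_reasoning(chains, score_local, score_global):
    """`chains` is a list of (reasoning_text, answer) pairs sampled from an LLM.
    Phase 1 (local scoring): group chains by final answer and keep the
    highest-scoring chain within each group. Phase 2 (global evaluation):
    compare the group representatives and return the winning answer."""
    groups = defaultdict(list)
    for chain, answer in chains:
        groups[answer].append(chain)
    representatives = {
        answer: max(group, key=score_local) for answer, group in groups.items()
    }
    return max(representatives, key=lambda a: score_global(representatives[a]))

# Toy demo, with chain length as a stand-in scorer (a real system would
# prompt an LLM judge to rate consistency, completeness, clarity, etc.).
chains = [("short proof", "42"),
          ("a longer, more complete derivation", "42"),
          ("hand-wavy guess", "41")]
print(aggregation_of_reasoning(chains, score_local=len, score_global=len))  # 42
```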
Aman Joshi
Aspiring Data Scientist | AI and Big Data Specialist | Seeking Opportunities in Data Analytics| Golden Gear Studios | Aashayein a ray of hope
🚀 The Synergy of Mathematics, AI, and Big Data: Transforming Our World 🚀

I've just published a new article on Medium exploring how the integration of mathematics, artificial intelligence (AI), and big data is driving innovation and transforming our daily lives.

In this article, I delve into:
• The mathematical foundations of AI and machine learning
• The role of big data in modern analytics
• Real-world applications transforming industries
• Ethical considerations and future challenges

Whether you're a tech enthusiast, data scientist, or just curious about the digital age, this article offers valuable insights into the transformative power of these technologies.

#AI #BigData #Mathematics #MachineLearning #DataScience #TechInnovation
Adderbee Research Labs
Which one's better? Math or language-based AI? Hmmmmmm.... 🤔

At Adderbee we believe that basic language is the foundation of all effective AI interaction, and in order to make technology available to everyone, we are building a semantic cognitive architecture that uses basic language instead of relying on the rigidity of math.

This allows our Peer-to-Peer Personal AI to be used by anyone, not just techies.

Make sure you visit our website to learn more, and sign up for our waitlist to keep up to date: https://lnkd.in/gjutvnUf

#AI #AIinnovation #peertopeer
Jesse H.
GenAI | ML/AI Engineering | Multi-Agent Systems | Knowledge Graphs
Graphs are powerful additions to RAG systems. They capture knowledge-based entity relationships rather than just bulk statistical patterns in the language. Using them in conjunction with smart chunking and retrieval methods can make an unbelievable difference in the way GenAI answers questions.
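A minimal sketch of the idea with networkx: retrieve facts from an entity graph to supplement ordinary chunk retrieval. The graph, the relations, and the upstream entity extraction are all illustrative assumptions, not a particular system's design.

```python
import networkx as nx

# Toy knowledge graph: nodes are entities, edges carry typed relations.
kg = nx.DiGraph()
kg.add_edge("Aristotle", "Physics", relation="wrote")
kg.add_edge("Physics", "time", relation="discusses")
kg.add_edge("time", "Zeno's paradoxes", relation="related_to")

def graph_retrieve(question_entities, hops=1):
    """Collect facts within `hops` of the entities found in a question; a real
    system would prepend these to the prompt alongside vector-retrieved chunks."""
    facts = set()
    for ent in question_entities:
        if ent not in kg:
            continue
        near = nx.single_source_shortest_path_length(kg, ent, cutoff=hops)
        for node in near:
            for u, v, data in kg.edges(node, data=True):
                facts.add(f"{u} -[{data['relation']}]-> {v}")
    return sorted(facts)

# Entity extraction from the question is assumed to have happened upstream.
print(graph_retrieve(["Aristotle"], hops=2))
```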
Pi School
Every Friday, we bring you Pi AI Weekly Trends, and we're now at edition #10! Our weekly feature brings you the latest AI technology to keep you ahead of the game. This week, we highlight two papers on Sharpness-Aware Minimization (#SAM) in loss space and new LLMs by Mistral AI:

🔹 Two exciting new papers on avoiding sharp minima of the loss landscape to increase generalisation abilities: a universal sharpness measure and a model-agnostic SAM (see the sketch below) 🔗 https://lnkd.in/dhgZ9W3U - https://lnkd.in/dkuUpTAV
🔹 Mathstral and Codestral Mamba are two 7B models by @Mistral, trained on mathematics and code generation, respectively 🔗 https://lnkd.in/dprvp4eP - https://lnkd.in/dduWGTfd
🔹 Mistral Nemo is a multilingual 12B model by @Mistral. It has a large context window, state-of-the-art reasoning, and world knowledge abilities 🔗 https://lnkd.in/d7i88YBf

Our Senior Deep Learning Scientist, Àlex R. Atrio, selected these links for you.

Was this helpful? Let us know by liking and sharing!

#MachineLearning #DeepLearning #AI #LLMs #GeneralizationInAI #PiAIWeeklyTrends
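For readers new to SAM, here is a minimal sketch of one sharpness-aware update in PyTorch, following the original SAM recipe rather than either linked paper's variant: take the gradient at the current weights, ascend to a nearby worst-case point, and descend using the gradient computed there.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One Sharpness-Aware Minimization step (minimal sketch of the original
    SAM recipe): perturb the weights toward the local worst case, then step
    with the gradient evaluated at the perturbed point."""
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()           # gradient at the current weights w
    grads = [p.grad.detach().clone() for p in params]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    with torch.no_grad():                     # ascend to w + rho * g / ||g||
        eps = [rho * g / (norm + 1e-12) for g in grads]
        for p, e in zip(params, eps):
            p.add_(e)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()           # gradient at the perturbed point
    with torch.no_grad():                     # undo the perturbation, then step
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()

# Usage sketch on a tiny model:
model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
sam_step(model, torch.nn.functional.cross_entropy, x, y, opt)
```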
Cohorte
Theorem proving just took a massive leap forward.

DeepSeek-Prover-V1.5 is revolutionizing formal verification in Lean 4, blending language models, reinforcement learning, and tree search in ways never seen before.

Here's what makes it stand out:
- Enhanced Training: Powered by DeepSeekMath-Base and refined with natural language comments and tactic state data to improve understanding of formal mathematics.
- Reinforcement Learning from Proof Assistant Feedback (RLPAF): Feedback-driven optimization, using the Lean 4 prover itself to sharpen proof generation.
- RMaxTS: A cutting-edge Monte-Carlo Tree Search method, incentivizing diverse proof exploration in challenging environments.

Achievements? Benchmark-topping results on miniF2F and ProofNet.

This paper is more than a milestone; it's a glimpse into the future of automated reasoning.

Full paper here: arxiv.org/pdf/2408.08152

#DeepSeekProver #TheoremProving #Lean4 #AI #DeepLearning
_____________
✔️ Click "Follow" on the Cohorte page for daily AI engineering news.

Credits: Huajian Xin, Z.Z. Ren, Junxiao Song, Zhihong Shao, Wanjia Zhao, Haocheng Wang, Bo Liu, Liyue Zhang, Xuan Lu, Qiushi Du, Wenjun Gao, Qihao Zhu, Dejian Yang, Zhibin Gou, Z.F. Wu, Fuli Luo, Chong Ruan
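For readers unfamiliar with Lean 4, here is what a formally verified statement looks like: a deliberately trivial example, far simpler than the miniF2F or ProofNet problems the prover targets.

```lean
-- A tiny Lean 4 proof: "formal verification" means the kernel mechanically
-- checks that the term really proves the stated proposition.
example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- The same fact as a tactic proof, the style a prover model typically emits.
example (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]
```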