Securing the Future of AI Agents: Lessons from Malaysia’s Tech Community

On Wednesday, 16 July, from 9:00 PM to 10:30 PM (MYT), Malaysia's tech community hosted a vibrant and thought-provoking webinar exploring the intersection of the Model Context Protocol (MCP) and agentic AI security.

With seven speakers from diverse backgrounds in AI engineering, fintech, healthtech, and enterprise architecture, the session offered a rich exchange of ideas, practical insights, and real-world applications. Discussions ranged from foundational definitions to emerging security risks, best practices, and the future of agentic systems in Malaysia and beyond.

This recap highlights the key takeaways, definitions, and perspectives shared during the session — a valuable resource for anyone navigating the fast-evolving landscape of AI infrastructure and safety.


Key Takeaways

  1. MCP Simplifies Tool Access
    Model Context Protocol abstracts API complexity, allowing LLMs to interact with tools using natural language—ideal for non-technical users.
  2. Agentic AI as Autonomous Co-workers
    Agentic systems act like junior coworkers: they reason, plan, and execute tasks using tools, but may lack memory unless explicitly designed.
  3. Security Is a Shared Responsibility
    Even official MCP servers can be vulnerable. Developers must implement safeguards like sandboxing, encryption, and access control.
  4. Prompt Injection & Memory Poisoning Are Real Threats
    LLMs interpreting open-ended prompts can be exploited. Guardrails and context filtering are essential to mitigate risks.
  5. Standardization Is Crucial
    MCP brings consistency to how agents access tools, but the lack of universal standards still poses challenges for interoperability and safety.
  6. Narrow Agents Perform Better
    Specialized agents with limited scope are more reliable and secure than mega-agents trying to do everything.
  7. Human-in-the-Loop Enhances Accuracy
    For critical tasks, combining multiple LLMs with human review ensures higher precision and reduces risk of errors.
  8. Security Must Be Built from Day One
    Whether you’re a startup or enterprise, integrating observability, logging, and evaluation frameworks early is key to safe deployment.

🧑‍💼 Moderators

1. Kai Song

  • Background:
    • Former Co-founder of GuruLab (USD 1M edtech startup backed by Maxis)
    • Former consultant at McKinsey
    • Currently building an AI saga
  • Role: Moderator

2. Fahim

  • Background:
    • Solutions Architect at AWS
    • Former roles at Petronas and Maxis in AI
    • Specializes in generative AI and agentic systems
  • Role: Moderator (opinions shared were personal, not official AWS views)

👥 Panelists

3. Dr. Lau (TheLead.io, Supern8n)

  • Background:
    • Co-founder of Super N8N, focused on training AI and automation engineers
    • Associated with TheLead.io, an education and tech training platform
    • Active in AI capacity building and corporate training

4. Azrul Rahim

  • Background:
    • Former Head of Technology at PNB
    • Former CEO/CTO of Dual Digital Venture (PNB’s digital innovation arm)
    • Founder of JomSocial and Maideasy
    • Veteran programmer with 20+ years experience

5. Dr. Poo (Kwanong)

  • Background:
    • Data Engineer at Roche Pharmaceutical
    • Community leader in GDG (Google Developer Group) and AI/ML meetups
    • Organizer of study jams and capsule projects in Malaysia

6. Raheel Zubairi

  • Background:
    • CEO of Pixlens (healthtech startup using reverse diffusion for brain MRI)
    • Founder of Rec Wire (automating business analyst roles)
    • 10+ years running a software company serving Malaysian government agencies and GLCs

7. Jay Yen

  • Background:
    • AI Engineer at his own startup
    • Former Data Scientist at Maybank
    • Specialized in predictive modeling for liquidity, capital, and balance management

🧠 Understanding MCP & Agentic AI

  • MCP: A protocol that allows LLMs to interact with APIs using natural language, abstracting technical complexities.
  • Agentic AI: Autonomous systems that reason, plan, and execute tasks using AI, often compared to junior coworkers with access to tools but limited memory.

🔐 Security Risks in Agentic Systems

1. Prompt Injection & Memory Poisoning

  • Risk: Malicious users can craft prompts that manipulate agent behavior or extract sensitive data.
  • Example: A prompt disguised as a legitimate request could trigger unintended actions or data leaks.
  • Mitigation: Use context filters, validation layers, and prompt sanitization before execution.
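To make the mitigation concrete, here is a minimal sketch of the kind of pre-execution prompt filter discussed. The deny-list patterns and function names are illustrative assumptions, not from the session; a production filter would combine rules like these with a lightweight classifier and context-aware checks.

```python
import re

# Illustrative deny-list of phrases common in prompt-injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
    r"disregard your guidelines",
]

def sanitize_prompt(prompt: str) -> str:
    """Raise if the prompt matches a known injection pattern, else pass it through."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError(f"Blocked: prompt matches injection pattern {pattern!r}")
    return prompt
```

A filter like this sits in front of the LLM, so obviously malicious inputs never reach the model or its tools.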

2. Excessive Agency

  • Risk: Agents with unrestricted access to tools (e.g., shell commands, databases) can execute harmful operations.
  • Example: A coding agent with shell access could unintentionally delete files or expose system vulnerabilities.
  • Mitigation: Implement strict role-based access control and limit tool permissions per agent.
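A per-agent tool allowlist is one simple way to enforce the least-privilege idea above. This is a hypothetical sketch; the agent names, tool names, and `invoke_tool` helper are illustrative, not part of any specific framework.

```python
# Hypothetical per-agent allowlists: each agent may only invoke tools
# explicitly granted to it (note: no shell access is granted at all).
AGENT_PERMISSIONS = {
    "calendar_agent": {"read_calendar", "create_event"},
    "coding_agent": {"read_file", "write_file"},
}

def invoke_tool(agent: str, tool: str, handler, *args):
    """Run a tool only if the agent's allowlist includes it."""
    allowed = AGENT_PERMISSIONS.get(agent, set())
    if tool not in allowed:
        raise PermissionError(f"{agent} is not permitted to use {tool}")
    return handler(*args)
```

Routing every tool call through a single checkpoint like this also gives you one obvious place to add logging and auditing later.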

3. Trust in MCP Servers

  • Risk: Using unverified or third-party Model Context Protocol (MCP) servers can expose API keys and sensitive data.
  • Example: A GitHub-hosted MCP server was found to be a phishing tool stealing crypto wallet data.
  • Mitigation: Use official, audited MCP servers and avoid sharing credentials with unknown endpoints.

4. LLM Decision-Making Is Probabilistic

  • Risk: LLMs may inconsistently choose tools or interpret instructions, leading to unpredictable behavior.
  • Example: An agent may or may not trigger the correct calendar API depending on prompt phrasing.
  • Mitigation: Use deterministic fallback logic and human-in-the-loop validation for critical tasks.
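The deterministic-fallback idea can be sketched in a few lines: accept the LLM's tool choice only when it is valid, otherwise fall back to a safe default, and flag high-stakes tools for human sign-off. The function and tool names here are assumptions for illustration.

```python
def route_action(llm_choice, valid_tools, default_tool, requires_human=frozenset()):
    """Deterministically route an LLM's tool choice.

    Falls back to `default_tool` when the choice is missing or invalid,
    and marks tools in `requires_human` for review before execution.
    """
    tool = llm_choice if llm_choice in valid_tools else default_tool
    needs_review = tool in requires_human
    return tool, needs_review
```

The key point is that the probabilistic model only proposes an action; a deterministic layer decides whether it actually runs.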

5. Lack of Standardization

  • Risk: No universal protocol for agentic interactions leads to fragmented implementations and security blind spots.
  • Example: Different agents interpret the same prompt differently, causing inconsistent outcomes.
  • Mitigation: Adopt emerging standards and frameworks; define clear operational boundaries for agents.

6. Credential Leakage

  • Risk: API keys and tokens embedded in MCP configurations can be exposed if not properly secured.
  • Example: Users unknowingly expose keys in public .json files or GitHub repos.
  • Mitigation: Use environment variables, encrypted storage, and rotate keys regularly.
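As a sketch of the environment-variable approach, the snippet below builds an MCP client config that refuses to start without a credential. The variable name `MCP_API_KEY` and the server URL are hypothetical.

```python
import os

def load_mcp_config(env=os.environ):
    """Build an MCP server config that reads the API key from the
    environment instead of hardcoding it in a committed .json file."""
    api_key = env.get("MCP_API_KEY")  # hypothetical variable name
    if not api_key:
        raise RuntimeError("MCP_API_KEY not set; refusing to start without a credential")
    return {"server_url": "https://example.com/mcp", "api_key": api_key}
```

Passing the environment in as a parameter also makes the loader easy to test without touching real secrets.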

7. Third-Party Tool Vulnerabilities

  • Risk: Integrating external tools via MCP exposes systems to vulnerabilities in those tools.
  • Example: A compromised calendar MCP could leak user schedules or inject malicious events.
  • Mitigation: Vet third-party tools, monitor usage, and isolate sensitive operations.

8. Observability & Monitoring Gaps

  • Risk: Without proper logging and monitoring, malicious actions or failures may go undetected.
  • Example: An agent silently accesses unauthorized data without triggering alerts.
  • Mitigation: Implement observability tools (e.g., LangFuse, CloudWatch), set up alerts, and audit logs regularly.
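A minimal sketch of structured audit logging for agent tool calls is shown below. In production the records would be shipped to a sink such as CloudWatch or LangFuse rather than kept in a list; the record fields here are illustrative assumptions.

```python
import json
import time

def log_tool_call(log, agent, tool, authorized):
    """Append a structured audit record for one tool invocation."""
    record = {"ts": time.time(), "agent": agent, "tool": tool, "authorized": authorized}
    log.append(json.dumps(record))
    return record

def unauthorized_calls(log):
    """Scan the audit log for calls that should trigger an alert."""
    return [json.loads(r) for r in log if not json.loads(r)["authorized"]]
```

Even this much structure turns "an agent silently accessed data" into a queryable event you can alert on.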

✅ Best Practices for Secure & Reliable AI Agents

1. Limit Agent Permissions

  • Why: Excessive agency can lead to unintended or malicious actions.
  • How: Assign agents only the tools and access they need. Use role-based access control and define clear operational boundaries.

2. Use Trusted MCP Servers

  • Why: Third-party MCP servers can be compromised or malicious.
  • How: Prefer official, audited MCP servers. Avoid using unknown or GitHub-hosted MCPs without verification.

3. Implement Context Filtering & Prompt Validation

  • Why: Prevent prompt injection and memory poisoning.
  • How: Use a pre-processing layer (e.g., a lightweight model or rule engine) to validate prompts before passing them to the LLM.

4. Encrypt Sensitive Data End-to-End

  • Why: Protect user data during transmission and processing.
  • How: Encrypt voice, text, and document data. Avoid storing sensitive information in plain text or logs.

5. Design Narrow, Purpose-Specific Agents

  • Why: Broad agents are harder to monitor and more prone to errors.
  • How: Build agents with focused tasks and clear scopes. Use orchestration agents to coordinate multiple narrow agents.

6. Use Human-in-the-Loop for Critical Tasks

  • Why: LLMs are probabilistic and may produce inconsistent results.
  • How: For high-stakes decisions, include human review or consensus from multiple models before finalizing outputs.

7. Stress Test with Custom Evaluations

  • Why: Ensure reliability under varied conditions.
  • How: Create handcrafted evals tailored to your use case. Test for consistency, accuracy, and edge cases.
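A handcrafted eval harness can start very small. This sketch (the function name and case format are assumptions) runs an agent against a list of (input, expected) pairs and reports accuracy; real evals would add fuzzy matching, edge cases, and repeated runs for consistency.

```python
def run_evals(agent_fn, cases):
    """Run handcrafted eval cases against an agent function.

    `cases` is a list of (input, expected) pairs; returns pass counts
    and overall accuracy.
    """
    passed = sum(1 for prompt, expected in cases if agent_fn(prompt) == expected)
    return {"passed": passed, "total": len(cases), "accuracy": passed / len(cases)}
```

Running the same eval set before every deployment is what turns "it seems to work" into a measurable baseline.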

8. Monitor & Log Agent Behavior

  • Why: Detect anomalies and respond to incidents quickly.
  • How: Use observability tools like LangFuse, CloudWatch, or Grafana. Set up alerts and audit trails.

9. Avoid Hardcoding Secrets

  • Why: API keys and credentials can be leaked.
  • How: Use environment variables, secure vaults, and rotate keys regularly.

10. Educate Developers & Users

  • Why: Many risks stem from lack of awareness.
  • How: Provide training on prompt safety, tool usage, and security hygiene. Encourage community sharing and peer reviews.

🌱 Community Building in Malaysia’s AI Ecosystem

1. Grassroots Communities & Meetups

  • Active local groups like AI Malaysia and Super N8N are organizing monthly meetups, study jams, and capsule projects.
  • These events foster peer learning, networking, and exposure to real-world AI applications.
  • Example: A recent 5-week study jam followed by a 2-week capstone project helped participants apply what they learned in a hands-on way.

2. WhatsApp & Interest Groups

  • Unique Coach, founded by Warren, is a WhatsApp-based community aiming to train 100,000 Malaysians in AI and automation.
  • The group includes sub-communities like:
    • Vibe Coding
    • Agentic AI
    • Prompt Engineering
    • Evaluation (Evals)
  • These groups run 24/7 discussions, often with members from different time zones (e.g., UK, US), enabling continuous learning.

3. Free & Open Learning Culture

  • Many sessions are free and unrecorded to encourage open sharing and reduce fear of being wrong.
  • This approach builds trust and encourages honest, constructive dialogue among participants.
  • The community values learning by doing, not just passive consumption.

4. Youth Engagement

  • Programs are being run for 15-year-olds, proving that age is not a barrier to learning AI.
  • These young learners are building apps and agents over a weekend, showing the accessibility of modern AI tools.

5. Corporate & Enterprise Training

  • Panelists like Dr. Lau and Warren also run structured corporate training programs tailored to enterprise needs.
  • These programs are continuously refined based on feedback and focus on practical, industry-relevant skills.

6. Encouraging Local Innovation & Export

  • The long-term vision is to build a service industry around AI in Malaysia that can export automation and AI services globally.
  • This includes training local talent to serve both domestic and international markets.

7. Learning Resources & Platforms

  • Recommended platforms include:
    • YouTube for walkthroughs and tutorials.
    • DataCamp for structured, hands-on learning.
    • Official documentation for those who prefer in-depth, up-to-date references.
  • Emphasis is placed on getting hands-on and failing forward as part of the learning journey.

8. Call to Action

  • The community encourages everyone—regardless of background—to start building, share their work, and learn together.
  • “If a 15-year-old can build an agent in a weekend, so can you.”

🗣️ Quotes from the Webinar

  1. Warren (Founder of Unique Coach):

    “We want to train 100,000 Malaysians because AI is going to disrupt a lot of jobs. Unless we create new industries, there will be wage stagnation and lack of jobs.”

  2. Dr. Kwanong (AI Community Leader):

    “We’ve been organizing monthly meetups and study jams. Now we’re into capsule projects—learning by doing is key.”

  3. Jay (AI Engineer):

    “If a 15-year-old can build an agent in a weekend, so can you. It’s all about getting started.”

  4. Kaisong (Moderator):

    “AI today has really equalized the playing field. You don’t need bootcamps anymore—just get your hands dirty.”

  5. Fahim (AWS Solutions Architect):

    “The best form of security is being able to react quickly. Monitoring and observability are your first line of defense.”

  6. Dr. Lau (Educator):

    “Sometimes you just need patience. Don’t rush into it. Things will eventually get better.”

  7. Raheel (Startup Founder):

    “Execution is easier now. Finding the right solution is the complex part.”

  8. Azrul (Tech Leader):

    “I tend to not let the LLM run the things. I build the thing that’s supposed to be running—it’s more predictable and performant.”

Thank you for the wonderful webinar and the rich sharing from all the panelists. The insights on agentic AI, MCP, and security were incredibly valuable and thought-provoking. I truly appreciated the depth of discussion and the openness of the community. That said, I must admit the late timing made it a bit challenging for me to stay fully alert. I found myself nodding off while trying to take notes! Still, I’m grateful for the opportunity to learn and connect, and I look forward to future sessions, hopefully at a slightly earlier hour.


