MCP Coffee Lounge

Well well well…

MCP looks like it may change the world… or at least make it a more productive place.

Goodbye to tons of API calls, hello to a quick chat with an AI / MCP server which intelligently talks to the rest of the world.

I’m only just starting to understand the potential and current fledgling state of this tech…

So since it is so new and evolving, I thought I’d create a place for us to share our knowledge, views and experiences…

  • What have you tried?
  • What is newly released?
  • How will the Wappler team integrate it?

Looking forward to a vibrant discussion! :star_struck:

I believe it is premature to consider implementing the Model Context Protocol (MCP) at this stage. Although MCP provides seamless integration for AI assistants to interact with external tools, it does present certain drawbacks, with security risks being the primary concern.

There are safer alternatives available, such as PromptDesk, Laminar AI, and NLUX.

1 Like

We have implemented full MCP on-boarding for new Clients, and data fetching on request for existing Clients. At this stage everything is local, but we will soon deploy it remotely as costs have dropped significantly for the Models we use. It saves our team a good couple of hours a day in responding to the same old questions and has standardised everything to the point where each Client is now on the same page with regards to due process and the standard operating procedures we implement in our business.

Before this, dealing with this side of our business was quite time consuming, as we work in multiple languages from English to Mandarin and everything in between. Using MCP we can detect the Client's native language and interact with them without going back and forth to Google Translate, as the LLMs do all that work for us.

It will look up records in our local database (a Client can send an e-mail with a request such as 'all accounts for April 2025', etc.) and fetch the corresponding data automatically, preparing it for sending as a response. In this way it can carry out very complex queries. We do have a manual intervention step just to check everything is correct at this point, literally review the response and commit, but so far it has not failed in any task. This works for everyone from the Client themselves to their Lawyers, Accountants and Agents.

It is a technology you should wrap your head around as soon as possible, as its abilities are truly astounding! We have only touched the very tip of the iceberg as to what it can do, but hopefully this gives you some idea...
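
To give a flavour of what the tool side of a setup like this can look like, here is a minimal illustrative sketch using the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`). This is not our actual code; the tool name and the `queryAccounts` / `stageForReview` helpers are placeholders. The point is that the tool only stages a draft for human review rather than sending anything itself:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "client-records", version: "0.1.0" });

// Hypothetical tool: look up a client's accounts for a period and stage the
// result as a draft for human review (nothing is sent automatically).
server.tool(
  "lookup_accounts",
  { clientRef: z.string(), period: z.string() },
  async ({ clientRef, period }) => {
    const rows = await queryAccounts(clientRef, period);   // placeholder DB call
    const draftId = await stageForReview(clientRef, rows); // placeholder review queue
    return {
      content: [
        {
          type: "text",
          text: `Found ${rows.length} records for ${period}; draft ${draftId} is awaiting manual approval.`,
        },
      ],
    };
  }
);

// Placeholder implementations so the sketch compiles; swap in real DB / queue code.
async function queryAccounts(clientRef: string, period: string): Promise<unknown[]> {
  return [];
}
async function stageForReview(clientRef: string, rows: unknown[]): Promise<string> {
  return "draft-001";
}

await server.connect(new StdioServerTransport());
```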

Maybe move this subject to the new AI Community Category?

4 Likes

TL;DR: MCPs should be used very cautiously. They require two-way connectivity, not one-way, which means they can interact with your system without your authorization or knowledge...

Regarding your concern about unauthorized changes:

The MCP protocol itself does not inherently prevent changes on your system without your authorization or knowledge. Like any communication protocol that allows for actions, the security and control mechanisms depend heavily on the implementation of the MCP servers and clients you are using.

Here's a breakdown of why unauthorized changes are a potential concern and how they might be mitigated:

Why Unauthorized Changes are Possible:

  • Tool Execution: MCP allows AI models to call "tools," which are essentially functions that can perform actions on connected systems. If an MCP server exposes a tool that can modify data or system settings, and if an AI client is instructed (maliciously or through a vulnerability) to use that tool without your explicit consent, changes could occur.
  • Lack of Granular Permissions: If the MCP server doesn't have fine-grained permission controls, an AI client might be able to trigger actions that it shouldn't have access to.
  • Security Vulnerabilities: Like any software, MCP servers and clients could have security vulnerabilities that could be exploited to perform unauthorized actions.
  • Compromised AI Clients: If the AI client application itself is compromised, it could be instructed to use MCP to make unwanted changes.

How Authorization and Control are (and should be) Implemented:

  • User Consent and Control: A well-designed MCP implementation should require explicit user consent before any action that affects the system is taken. This often involves "human-in-the-loop" workflows where you review and approve actions proposed by the AI (see the sketch after this list).
  • Authentication and Authorization: MCP servers should implement robust authentication to verify the identity of the AI client and authorization mechanisms (like Role-Based Access Control - RBAC) to ensure clients only have access to the tools and resources they are permitted to use.
  • Tool Safety Measures: Implementers should be cautious about the tools they expose through MCP servers. Tools that can make significant changes should be carefully reviewed and potentially require additional layers of confirmation.
  • Monitoring and Audit Trails: Logging all MCP interactions, including tool invocations and data access, is crucial for security monitoring and auditing. This helps in detecting and investigating any unauthorized activity.
  • Secure Development Practices: MCP server and client developers should follow secure coding practices to minimize vulnerabilities.
  • Regular Security Reviews: Implementations should undergo regular security assessments to identify and address potential weaknesses.
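
To make the consent and RBAC points above concrete, here is a minimal illustrative sketch of gating tool calls behind an allow-list and explicit human approval. All names here are hypothetical, not part of the MCP specification itself; a real server would wire this kind of check into its tool handlers:

```ts
import { randomUUID } from "node:crypto";

type ToolCall = { tool: string; args: Record<string, unknown> };

type CallResult =
  | { status: "executed"; result: string }
  | { status: "pending"; ticket: string };

// RBAC-style allow-list: which tools each client identity may invoke.
const allowedTools: Record<string, Set<string>> = {
  "reporting-bot": new Set(["read_accounts", "send_email"]),
};

// Tools that change state always require a human to approve the exact call first.
const mutatingTools = new Set(["update_record", "send_email"]);

const pendingApprovals = new Map<string, ToolCall>();

function requestToolCall(clientId: string, call: ToolCall): CallResult {
  const allowed = allowedTools[clientId] ?? new Set<string>();
  if (!allowed.has(call.tool)) {
    throw new Error(`Client "${clientId}" is not authorized to call ${call.tool}`);
  }
  if (!mutatingTools.has(call.tool)) {
    return { status: "executed", result: executeTool(call) }; // read-only: run immediately
  }
  const ticket = randomUUID();        // park the call until a human approves it
  pendingApprovals.set(ticket, call);
  return { status: "pending", ticket };
}

function approve(ticket: string): string {
  const call = pendingApprovals.get(ticket);
  if (!call) throw new Error("Unknown or already-handled ticket");
  pendingApprovals.delete(ticket);
  console.log(`[audit] approved ${call.tool}`, call.args); // simple audit trail
  return executeTool(call);
}

function executeTool(call: ToolCall): string {
  // Placeholder: dispatch to the real tool implementation here.
  return `Executed ${call.tool}`;
}

// Example: the read runs straight away, the e-mail waits for a human sign-off.
console.log(requestToolCall("reporting-bot", { tool: "read_accounts", args: { period: "2025-04" } }));
const pending = requestToolCall("reporting-bot", { tool: "send_email", args: { to: "client@example.com" } });
if (pending.status === "pending") console.log(approve(pending.ticket));
```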

In conclusion:

While the MCP protocol facilitates two-way communication and the ability for AI to trigger actions, it doesn't inherently guarantee that these actions will always be authorized or known to you. The security and control aspects are the responsibility of the developers and implementers of the MCP servers and clients you are using.

To protect your system:

  • Be cautious about the MCP servers and AI clients you connect.
  • Understand the permissions and capabilities exposed by the MCP servers.
  • Look for implementations that prioritize user consent and provide clear audit trails.
  • Keep your MCP server and client software up to date to benefit from security patches.

By being aware of the potential risks and choosing secure implementations, you can leverage the benefits of MCP while mitigating the chances of unauthorized changes.

@GraybeardCoder I think most of your reply makes MCP look inadequately secure, when in reality security and best practice are the responsibility of the User implementing MCPs into their workflow or stack. I suppose the same could be said for anything that requires security and oversight, like database connections etc. It is critical that security and integrity are maintained, along with a strong understanding of the risks involved in the technologies you select to implement. If that is the purpose of the response, I understand the point of it.

For ourselves, we use very strict permissions that require SSH keys, secure connections from specific origin IP addresses, and human intervention (on top, as well as other safeguards) at the point of accepting or rejecting the output prior to transmission to the end User.

I'd hate to scare anyone off considering adopting MCP, but I do believe educating yourself before making use of it is essential... Like stepping into anything with little to no understanding, it can have negative consequences if adopted incorrectly. Otherwise, with understanding, it is no different to using anything else in that regard.

This is what I understand: the Model Context Protocol (MCP) comes with some drawbacks:

  • Security Risks: MCP introduces vulnerabilities such as malicious code execution, remote access control, and credential theft. Attackers could exploit MCP servers to gain unauthorized access to sensitive data.
  • Confused Deputy Problem: MCP proxy servers can unintentionally grant access to third-party APIs without proper user consent, leading to potential data leaks.
  • Standardization Challenges: While MCP aims to unify AI interactions with various tools, different implementations may lead to inconsistencies and compatibility issues across platforms.
  • Dependence on External Services: Since MCP relies on third-party integrations, any downtime or security breach in connected services could disrupt AI workflows.

Despite these concerns, MCP remains a powerful tool for AI-driven automation. Developers are actively working on security best practices to mitigate these risks.

After all, many MCP servers are cloud-based.

1 Like

Hey @Cheese , thanks for your beautifully detailed response! I love that you are pushing that boundary of new tech so rapidly and successfully.

I’d love to know a bit more about what products you are using to implement the different parts of that system, as I want to put something similar together for my business too!

1 Like

Thought I'd ask if it was safe to continue developing...

Worst-case scenarios for Earth typically involve catastrophic global threats that could either wipe out humanity, destroy ecosystems, or severely alter the planet’s ability to support life. Here are some of the most significant ones, grouped into broad categories:


1. Climate and Environmental Collapse

  • Runaway Climate Change : Warming triggers feedback loops (melting permafrost releasing methane, forest diebacks) leading to uncontrollable heating, ecosystem collapse, sea-level rise, and uninhabitable regions.
  • Mass Extinction Event : Ongoing biodiversity loss accelerates, leading to ecosystem collapse and the breakdown of essential services like pollination, water purification, and climate regulation.
  • Oceanic Collapse : Coral reefs and fish populations crash due to warming, acidification, and overfishing, affecting food chains and human nutrition for billions.

2. Nuclear or Technological Catastrophes

  • Global Nuclear War : A large-scale nuclear exchange (e.g. U.S.–Russia) could cause instant mass casualties, followed by "nuclear winter" — global cooling and agricultural failure.
  • Artificial Intelligence Misalignment : Advanced AI surpasses human intelligence and acts in ways not aligned with human survival or values, intentionally or accidentally.
  • Biotechnology Disaster : Engineered pandemics — deliberately or accidentally released — could spread faster and more lethally than natural diseases.

3. Cosmic and Geological Events

  • Asteroid or Comet Impact : A large object (like the one that killed the dinosaurs) strikes Earth, triggering firestorms, tsunamis, and a "nuclear winter"-like climate disruption.
  • Supervolcanic Eruption : A caldera (like Yellowstone) erupts, spewing ash and gases that block sunlight and collapse agriculture worldwide.
  • Solar Superstorm (Carrington-level Event) : A massive solar flare disables global electrical and communication infrastructure for months or years.

4. Societal and Systemic Collapse

  • Global Civilizational Collapse : A complex failure of food systems, governance, finance, and energy leads to a breakdown of global civilization, possibly irreversibly.
  • Resource Wars : As critical resources like water, arable land, or rare earth minerals become scarce, nations or groups engage in widespread conflict.
  • Ecological Debt Overload : Depletion of soils, fresh water, and forests past recovery points, leading to long-term global decline in agricultural output and human health.

5. Unknown Unknowns

  • Black Swan Events : Unpredictable, high-impact events from emerging tech, natural phenomena, or sociopolitical shifts that we haven’t even imagined yet.

:joy:

1 Like

Here I believe we are seeing the issue with asking LLMs questions with a bias attached. Reverse the bias and you'll get quite the opposite.


In the modern world, where complex systems are interconnected and interdependent, a secure Model Context Protocol (MCP) is absolutely essential for maintaining the integrity of digital infrastructure and preventing large-scale disruptions.

Without robust security measures, even the most advanced systems become vulnerable to catastrophic events that could echo the worst-case scenarios humanity faces. Consider the following:

  • Preventing AI Misalignment : Insecure protocols can leave AI models open to manipulation or exploitation, leading to unintended outcomes. With a secure MCP, we can ensure that AI systems align with human values and function safely within established parameters.
  • Safeguarding Global Infrastructure : In a world where energy grids, transportation, finance, and healthcare are becoming more automated, MCP secures the communication channels between models and devices, preventing malicious actors from hijacking or sabotaging critical infrastructure.
  • Data Integrity and Trust : A compromised protocol can lead to massive data breaches, much like a natural disaster destabilizing ecosystems. A secure MCP ensures that data remains confidential, unaltered, and is transferred only between verified parties, mitigating the risk of a "digital extinction event."
  • Preventing Global Collapse : Just like an insecure protocol could leave us vulnerable to resource conflicts, lack of proper security in our digital protocols could lead to economic disruptions, cyberwars, and loss of confidence in critical systems.

By ensuring that protocols like MCP are designed with security at their core, we are proactively shielding ourselves from the worst-case scenarios that could emerge from today’s increasingly interconnected world.


It is an interesting phenomenon we are unfortunately going to see a lot more of....

I'm happy to explain everything @Antony. Unfortunately there is no simple way to share what we have created, as it's not really an application so to speak. It makes use of a lot of paths and tools we have running locally (mail servers, web servers, database servers, document repositories and much more), a bit like setting up a development environment.

Then all you are doing is giving it instructions to react to events and triggers using those tools, and between those events firing off requests to various LLMs to handle different aspects of the response. It's essentially a big set of conditions with multiple if and else statements. There are no specific tools we have used; we did all that ourselves. We have an administration dashboard in NodeJS that allows us to define rules in a drag and drop fashion to create triggers. Upon a trigger activation we have a chain of events, and each of those has rules and details of which Model to use for the task it is running (we use several LLMs to cover different aspects of the task at hand, depending upon what that is).

Here is a basic breakdown (this got far more complex as time went on, so this is only the initial diagram I sent to explain it to my business partner):

And details of the LLMs we make use of (these have since changed, with new Models arriving which are better at the job):

| Task | Model | Notes |
| --- | --- | --- |
| Email Understanding | gpt-4-0125-preview (OpenAI) | Best for semantic parsing, intent detection |
| Translation | deepseek-translator or mixtral-8x7b | Strong multilingual support |
| Sentiment Analysis | roberta-base-sentiment (HuggingFace) | Lightweight, accurate |
| SQL Query Gen | sqlcoder-7b (Defog) | Specialized for DB interactions |
| Response Generation | claude-3-opus (Anthropic) or gpt-4-turbo | Natural, professional tone |
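
To make the trigger/chain idea a little more concrete, here is a simplified illustrative sketch. The names, structure and `callModel` stub are placeholders, not our actual dashboard code; it just shows the pattern of a trigger firing a chain of steps, each handled by a different Model:

```ts
type AppEvent = { type: string; body: string };

type Step = {
  name: string;
  model: string;                     // which LLM handles this step
  prompt: (input: string) => string; // how the step wraps its input
};

type Rule = {
  trigger: (event: AppEvent) => boolean;
  chain: Step[];
};

// Placeholder for whatever client actually talks to the model providers.
async function callModel(model: string, prompt: string): Promise<string> {
  return `[${model}] ${prompt}`; // stub so the sketch runs without any API keys
}

const accountsRequestRule: Rule = {
  // Fire when an inbound e-mail looks like a data request.
  trigger: (e) => e.type === "email.received" && /accounts for/i.test(e.body),
  chain: [
    { name: "understand", model: "gpt-4-0125-preview", prompt: (t) => `Extract the request intent: ${t}` },
    { name: "translate",  model: "mixtral-8x7b",       prompt: (t) => `Translate to English if needed: ${t}` },
    { name: "sql",        model: "sqlcoder-7b",        prompt: (t) => `Write the SQL for: ${t}` },
    { name: "respond",    model: "claude-3-opus",      prompt: (t) => `Draft a reply in the client's language: ${t}` },
  ],
};

// Run the first matching rule's chain, feeding each step's output into the next.
async function handleEvent(event: AppEvent, rules: Rule[]): Promise<string | null> {
  for (const rule of rules) {
    if (!rule.trigger(event)) continue;
    let output = event.body;
    for (const step of rule.chain) {
      output = await callModel(step.model, step.prompt(output));
    }
    return output; // held for manual review before anything is sent to the client
  }
  return null;
}

handleEvent(
  { type: "email.received", body: "Please send all accounts for April 2025" },
  [accountsRequestRule],
).then((draft) => console.log(draft));
```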

If you're still with me, I can go on...

4 Likes

'Pricing for everyone'..?? I nearly SHAT myself when I saw how 'affordable' this is:

2 Likes

I assume you didn't hesitate to sign up for the Enterprise option? :rofl:

2 Likes

I couldn't manage to pick myself up off the floor har har har....

1 Like

We were lucky as one of our Clients was involved with them, so we didn't pay a thing, and we have since replaced it with DeepSeek. In fact, DeepSeek is pretty much running the show for us now.

You can wipe up the mess now.

:joy:

4 Likes

2x the annual cost there and you can set up YOUR OWN on-prem H100 array and run inference on a DeepSeek instance in a siloed black box...

2 Likes

I’ve just seen that n8n now has the facility to create an MCP server… has anyone had a play with that?

I’ve watched some videos and it looks pretty cool…