<rss version="2.0">
  <channel>
    <title>permissions on Venkado Blog</title>
    <link>https://venkado.org/categories/permissions/</link>
    <description></description>
    
    <language>en</language>
    
    <lastBuildDate>Sat, 25 Apr 2026 09:19:02 +0200</lastBuildDate>
    
    <item>
      <title>Context aware human approval for AI assistants</title>
      <link>https://venkado.org/2026/04/25/context-aware-human-approval-for/</link>
      <pubDate>Sat, 25 Apr 2026 09:19:02 +0200</pubDate>
      
      <guid>http://lynxai.micro.blog/2026/04/25/context-aware-human-approval-for/</guid>
<description>&lt;p&gt;To do something meaningful, an agent needs to interact with the world through other systems. When a generative artificial intelligence (GenAI) is given the option to do so, that is called &amp;ldquo;using a tool&amp;rdquo; or, more technically, tool calling or function calling. This requires some control to ensure a tool is used when and how it was intended. In combination with a restrictive environment, this gives you a good fundamental level of protection.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;AI applications ask for permissions. Over time those grants accumulate around frequently used execution patterns. The fact that a granted permission was good once does not necessarily make it acceptable the next time. &lt;strong&gt;If permissions are given with context, a time constraint and an impact limit&lt;/strong&gt;, the intention is much clearer. In the &lt;strong&gt;Relagent permission model&lt;/strong&gt; a grant does not only have a &lt;strong&gt;session or global scope&lt;/strong&gt; but also a &lt;strong&gt;maximum sensitivity limit&lt;/strong&gt;. Since privacy is a primary concern for this agent, tying the grant to the potential negative outcome of losing personal data gives the user a realistic chance to consider the trade-offs.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A huge part of why AI agents are so useful is their access to tools: their connection to the world. As with most tools, their use can lead to something good or bad. The issue is not the tool or the agent alone; it&amp;rsquo;s how both work together.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Why a sandbox is not enough&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A fundamental security principle for keeping systems safe is to grant only the access that is needed (see &lt;a href=&#34;https://en.wikipedia.org/wiki/Principle_of_least_privilege&#34;&gt;Principle of least privilege&lt;/a&gt;). For example, a web research tool requires internet access, but should not be able to use your local camera. A climate control system, in contrast, should not be allowed to freely access the internet.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s say you isolate your agent and tools well enough that they can do what they are supposed to do, and not more, which is a hard enough task in itself. You still cannot predict what information will pass through the allowed channels. Does private information leak into a web form your agent tries to fill out for you? Does some malicious text found on the internet end up in the email the agent is composing for you? A restrictive box around your agent cannot tell; it operates on the wrong layer.&lt;/p&gt;
&lt;/blockquote&gt;
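&lt;p&gt;Such a sandbox layer can be reduced to a minimal sketch (the tool names and the function below are illustrative, not part of any real system). The decision is made purely on the tool name, so an allowed channel can still carry data it should not:&lt;/p&gt;

```python
# Hypothetical sketch of a sandbox-style allowlist for tool calls.
# It decides on the tool name alone; the content of the call never
# enters the decision, so an allowed channel can still leak data.
ALLOWED_TOOLS = {'web_search', 'fill_web_form'}

def sandbox_permits(tool_name, arguments):
    # arguments is deliberately unused: this layer operates on the
    # wrong level to judge what information flows through the tool.
    return tool_name in ALLOWED_TOOLS
```

&lt;p&gt;The problematic case is exactly the one this check waves through: filling a web form with private data is permitted, because only the channel is inspected, not the payload.&lt;/p&gt;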
&lt;h2 id=&#34;what-do-you-sign-off-with-a-stamp-of-approval&#34;&gt;What do you sign off with a stamp of approval?&lt;/h2&gt;
&lt;p&gt;An agent asking you to allow executing a specific tool or command is a very common picture. And it is not too difficult for you to choose &lt;code&gt;yes&lt;/code&gt; or &lt;code&gt;no&lt;/code&gt; if you followed the conversation that led to that approval question. You might ask yourself something like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Does the action make sense?&lt;/li&gt;
&lt;li&gt;Does it seem safe?&lt;/li&gt;
&lt;li&gt;Is it reasonable?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Often you can almost intuitively answer &lt;code&gt;yes&lt;/code&gt; and approve the action.&lt;/p&gt;
&lt;p&gt;To avoid slowing down productivity, there is usually an option to approve now and not be asked again. The approval then remains valid for a longer time, maybe even across future sessions. But:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Is your given approval still that simple if you don&amp;rsquo;t know the context? If you have no idea what the agent tries to accomplish?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The agent does not care: your approval remains valid, even though your intentions are no longer part of the decision.&lt;/p&gt;
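&lt;p&gt;Reduced to a sketch, the &amp;ldquo;don&amp;rsquo;t ask again&amp;rdquo; pattern often amounts to little more than a set of tool names (the names here are illustrative). The reasoning behind the original approval is never recorded, so it cannot be re-checked later:&lt;/p&gt;

```python
# Hypothetical sketch of a blanket 'do not ask again' approval store.
standing_approvals = set()

def needs_approval(tool_name):
    return tool_name not in standing_approvals

def approve_forever(tool_name):
    # Only the bare fact that the tool was once allowed is stored;
    # the context and intent behind the approval are lost.
    standing_approvals.add(tool_name)
```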
&lt;p&gt;Security questions are annoying: they need attention, they slow you down. Still, that should not push you to avoid them completely.&lt;/p&gt;
&lt;h2 id=&#34;taking-the-risk-and-context-into-consideration&#34;&gt;Taking the risk and context into consideration&lt;/h2&gt;
&lt;p&gt;I see several concrete aspects that would make a &lt;strong&gt;given approval&lt;/strong&gt; more specific and less unpredictable.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Binding to a context&lt;/strong&gt;: A chat session, a topic, a branch or a project. That is commonly already the case.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Constraining time&lt;/strong&gt;: The user might be confident about the actions that are foreseeable right now, but not forever.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Impact limit&lt;/strong&gt;: Adding an &amp;ldquo;only if &amp;hellip;&amp;rdquo; constraint. For example, only if the monthly costs stay within 10,-, or only as long as no personal data is shared.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With such attributes in place, a user has a better way of formalizing intent. For example, the agent can then use the web as long as it does not share sensitive data. Or it could access memory, but only memory related to a specific project.&lt;/p&gt;
&lt;p&gt;This is the condensed data model used in the Relagent Engine:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-python&#34; data-lang=&#34;python&#34;&gt;&lt;span style=&#34;color:#66d9ef&#34;&gt;class&lt;/span&gt; &lt;span style=&#34;color:#a6e22e&#34;&gt;Grant&lt;/span&gt;(BaseModel, frozen&lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt;&lt;span style=&#34;color:#66d9ef&#34;&gt;True&lt;/span&gt;):
    approval_type: ApprovalType &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; ApprovalType&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;OutgoingData
    component: str &lt;span style=&#34;color:#f92672&#34;&gt;|&lt;/span&gt; &lt;span style=&#34;color:#66d9ef&#34;&gt;None&lt;/span&gt; &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;web_search&amp;#34;&lt;/span&gt;
    allowed_parameters: dict[str, str &lt;span style=&#34;color:#f92672&#34;&gt;|&lt;/span&gt; int &lt;span style=&#34;color:#f92672&#34;&gt;|&lt;/span&gt; bool] &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; {}
    wildcard_parameter: str &lt;span style=&#34;color:#f92672&#34;&gt;|&lt;/span&gt; &lt;span style=&#34;color:#66d9ef&#34;&gt;None&lt;/span&gt; &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#66d9ef&#34;&gt;None&lt;/span&gt;
    max_sensitivity: SensitivityLevel &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; SensitivityLevel&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;OpenInformation
    expires_at: datetime &lt;span style=&#34;color:#f92672&#34;&gt;|&lt;/span&gt; &lt;span style=&#34;color:#66d9ef&#34;&gt;None&lt;/span&gt; &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#66d9ef&#34;&gt;None&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;In the user interface it looks like this:
&lt;img src=&#34;https://venkado.org/uploads/2026/3f306b2151.png&#34; alt=&#34;app-0.1.22-pre-approval-search.png&#34;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;You still need isolation on system level&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Suppose you get a really good permission setup that limits tool usage to exactly what you intend, and you manage to separate data sources so that trusted and unknown origins do not mix, which is a non-trivial problem in itself. There are still a lot of unplanned things that can happen outside of how and when tools are called. Your tools might do more than they advertise (hidden features, supply chain attacks). Functionality can change over time (updates, cloud services). Software has bugs (that won&amp;rsquo;t change any time soon). And of course someone might put effort into manipulating your system by feeding it malicious data. Guards that prevent your system from performing actions you never intended might be the last line of defense against such bad actions.&lt;/p&gt;
&lt;/blockquote&gt;
</description>
    </item>
    
  </channel>
</rss>