AI feature ideas

Hello,
I saw in a recent RDM update that you’re looking for ideas for AI integration. I have a couple:

1. AI which watches sessions and takes notes:

Basically something parallel to 'session recordings' but where AI annotates and records what's being done (per session, or group of sessions).

That could mean automatic helpdesk or project notes, or even drafts for KB articles (with just the relevant steps, not the entire text log).
Basically, less time spent typing, and better-quality notes.

Beyond that, it opens some cool possibilities:

Interactive recall: “When was this changed, and who changed it?”

AI co-worker mode: It could spot errors or offer suggestions as you work. Like the AI chat in the sidebar, but 'per session' or 'per session group', watching and commenting on what you're doing.

Smarter macros: Once the AI understands the process and it has been validated (approved by people), it could repeat it automatically next time. Even if not feasible now, annotated session recording would lay the groundwork for this.

Security angle: If something suspicious happens mid-session, the AI could flag or even halt it (think rogue-access).

Follow-up tasks: The AI could create tickets, projects, etc., or suggest they be created for unrelated issues. This might happen automatically (via an API to the helpdesk, or a simple email to it). The AI could recognize these on its own, or only in concert with the user (e.g., a keyboard shortcut + mouse drag tells the AI that what I'm highlighting on-screen is a separate issue to investigate).

Think software like Windows' "Problem Steps Recorder" and projects like Microsoft's AI-based "OmniParser".
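To make the "just the relevant steps, not the entire text log" idea concrete, here's a minimal sketch. Everything in it is hypothetical (the event fields, the sample data); the actual LLM summarization/cleanup pass is deliberately left out, with only the deterministic "extract the actions into a numbered draft" step shown:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SessionEvent:
    timestamp: str
    actor: str
    action: str   # e.g. "ran command", "edited file"
    detail: str

def draft_kb_steps(events: List[SessionEvent]) -> str:
    """Turn a recorded session's event log into a numbered, KB-style
    step list, keeping only the actions rather than the full text log.
    A real implementation would hand this draft to an LLM for
    summarization and cleanup; that step is stubbed out here."""
    lines = ["Draft KB article (auto-generated from session recording):"]
    for i, ev in enumerate(events, start=1):
        lines.append(f"{i}. [{ev.timestamp}] {ev.actor}: {ev.action} - {ev.detail}")
    return "\n".join(lines)

# Hypothetical sample session
events = [
    SessionEvent("09:02", "alice", "opened RDP session", "PROD-DB01"),
    SessionEvent("09:05", "alice", "ran command", "net stop spooler"),
    SessionEvent("09:06", "alice", "edited file", "C:\\config\\svc.ini"),
]
print(draft_kb_steps(events))
```

Because each step keeps its timestamp and actor, the same structure would also support the "when was this changed, and who changed it?" recall idea above.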


2. Local / Mesh AI

What if each RDM installation were a node in a local AI network?

Every workstation would contribute whatever compute it can, like pooling GPU/CPU resources, so you could run a private, secure LLM locally without sending any data off-site.

Think projects like GPUStack, Distributed-LLaMA, or EXO.

The idea is that, together, all your RDM instances form a distributed "local cloud" for AI tasks like KB generation, summarization, report creation, etc. The RDM service host (server) would probably hold the system prompts and the AI model, sending a shard of a larger model to each workstation. Even if this used system RAM instead of GPU RAM, it could process things in batches, after hours.
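For the batch, after-hours case, here's a minimal sketch of how the server might spread queued AI tasks across the pooled workstations. All names and the capacity scores are made up for illustration; note that real model sharding (as in Distributed-LLaMA or EXO) splits a model's layers across nodes, whereas this simpler sketch only splits whole tasks:

```python
from typing import Dict, List

def assign_batches(tasks: List[str], nodes: Dict[str, int]) -> Dict[str, List[str]]:
    """Distribute queued AI tasks (e.g. after-hours KB generation)
    across workstations in proportion to the compute each one reports.
    `nodes` maps hostname -> relative capacity score (GPU/CPU)."""
    assignment: Dict[str, List[str]] = {name: [] for name in nodes}
    # Weighted round-robin: a node with capacity 2 appears twice in
    # the rotation, so it receives twice as many tasks.
    order = [name for name, cap in nodes.items() for _ in range(cap)]
    for i, task in enumerate(tasks):
        assignment[order[i % len(order)]].append(task)
    return assignment

# Hypothetical pool: wk1 has roughly twice the compute of wk2
plan = assign_batches(
    [f"summarize-session-{n}" for n in range(1, 7)],
    {"wk1": 2, "wk2": 1},
)
print(plan)
```

With six queued summarization jobs, wk1 ends up with four and wk2 with two, matching their 2:1 capacity ratio.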

Thanks for reading!

All Comments (1)

Hello,

Thank you for the suggestions and ideas. We will make sure to discuss your ideas internally and see where we go from there. There are a lot of opportunities and possibilities, and we're exploring many different avenues in that sphere. I'll forward this to our developer leading the charge on AI assistant-related development.

Regards,

Hubert Mireault