The pitch is familiar by now. Generative AI will reshape legal work: freeing staff from low-value tasks and letting firms do more with less. Some of that is already happening. A lot of it isn't. And where it isn't, the reason is rarely the technology itself.
It's the foundations underneath it.
A large language model is, at heart, a very capable reader and writer. If the precedents, know-how, matter records, and templates it can reach are well-organized, current, properly tagged, and trusted, it can do genuinely useful work with them. If they are scattered across shared drives, half-migrated document management systems, personal folders, and out-of-date intranets, no amount of model intelligence will close the gap. The output will be plausible but require detailed checking, and the promised productivity gain will quietly disappear into review time.
This is what the questions in the Legal AI Reality Check are designed to surface. We want to know, in concrete terms, where firms actually stand on the foundations that determine whether AI delivers: the state of their knowledge, the clarity of their policies, where partners and associates agree (and don't) on risk, and the practical barriers keeping good pilots from becoming everyday practice.
The survey is built around five short pillars, covering the foundations described above.
It takes about five minutes and is completely anonymous. If you choose to leave your details at the end, we'll send you the full report before it goes public, with Vable's own take on what the findings mean for the firms that want AI to actually pay off.
If you work in or with a law firm, please take part. The more answers we get, the clearer the picture of where the real foundation work still needs to happen.