
Smarter Idea Selection: Turning Idea Overload into Innovation Advantage

Open Access | Apr 2026


Midjourney AI prompt by GCO and GettyImages/Andriy Onufriyenko

Crowdsourcing works – sometimes too well. With today's AI tools and digital platforms, companies can gather thousands of ideas in a matter of days. But the real bottleneck isn’t generating ideas – it's evaluating them. When hundreds or thousands of submissions flood the funnel, even seasoned experts struggle to keep up. Fatigue sets in, attention wanes, and decisions become inconsistent. The question isn’t how to get more ideas; it's how to identify the right ones quickly and reliably.

BOX 1
Our research – developing and testing the ISE curve

We studied 21 enterprise contests with 4,191 ideas from 1,467 ideators and conducted an additional holdout contest with internal and external experts. We used the shortlists of ideas that were created in the contests as a measure of success for the screening models we tested. In the additional contest, we tested whether AI could help experts screen ideas more efficiently – without trying to replace their final judgment. Our best-performing approach, the ISE curve paired with measuring word atypicality, predicted the decisions of internal experts better than external experts did.

The ISE curve, our best screen, serves as a simple visual tool: The horizontal axis shows the share of ideas you screen out; the vertical axis shows the share of good ideas you tolerate losing (false negatives). The green-labeled operating points illustrate how leaders pick cuts that match their tolerance and timelines, in line with the managerial benchmarks discussed under "The typical tradeoff in idea screening" (Figure 1). The best model removed 44% of all ideas while sacrificing only 14% of the good ideas. Alternatively, for those unwilling to lose any winning ideas, we developed a two-step approach that screened out 21% of ideas without sacrificing a single winning idea.
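The logic of the curve can be sketched in a few lines. This is an illustrative reconstruction, not the authors' exact model: ideas are ranked by a screening score (e.g., word atypicality), and each cutoff yields one point pairing the share of ideas screened out with the share of expert-shortlisted ("good") ideas lost. The scores and labels below are invented toy data.

```python
def ise_curve(scores, is_good):
    """scores: higher = more likely to be a bad idea (e.g. word atypicality).
    is_good: True if experts shortlisted the idea.
    Returns one (share_screened_out, share_good_ideas_lost) point per cutoff."""
    n = len(scores)
    n_good = sum(is_good)
    # Screen from the most atypical idea downward.
    ranked = sorted(zip(scores, is_good), key=lambda p: p[0], reverse=True)
    points = []
    lost_good = 0
    for k, (_, good) in enumerate(ranked, start=1):
        if good:
            lost_good += 1
        points.append((k / n, lost_good / n_good))
    return points

# Toy contest: 10 ideas, 4 of which experts shortlisted.
scores  = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
is_good = [False, False, True, False, False, False, True, True, False, True]
curve = ise_curve(scores, is_good)
# Screening out the top 40% most atypical ideas loses 1 of 4 good ideas here.
print(curve[3])  # (0.4, 0.25)
```

Plotting these points gives the tradeoff curve from which an operating point is chosen.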

FIGURE 1

The ISE curve plotted against a common managerial threshold

Screening out ideas: word atypicality

As a measure for detecting bad ideas, a simple text-based indicator worked best: word atypicality. This new metric effectively distinguishes poor submissions from promising ones: It measures how much an idea's vocabulary deviates from the contest's common word set and flags ideas that are too vague, off-topic or oddly worded – traits commonly found in low-quality proposals. Richer, more relevant ideas are kept.
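The article does not publish the exact formula, but one simple reading of the metric can be sketched as follows: build the contest-wide vocabulary from all submissions, then score each idea by the share of its words falling outside the contest's common word set. The `top_k` cutoff and the toy ideas are assumptions for illustration; a real contest would use a much larger common set.

```python
from collections import Counter

def word_atypicality(idea, all_ideas, top_k=10):
    """Share of an idea's words not in the contest's top_k most common words.
    0.0 = fully anchored in the shared vocabulary, 1.0 = fully idiosyncratic.
    top_k=10 suits this toy corpus; real contests would use a larger set."""
    contest_counts = Counter(w for text in all_ideas for w in text.lower().split())
    common = {w for w, _ in contest_counts.most_common(top_k)}
    words = idea.lower().split()
    if not words:
        return 1.0
    return sum(w not in common for w in words) / len(words)

ideas = [
    "an app that matches drivers and riders in real time",
    "an app that matches dog owners with local walkers",
    "quantum blockchain synergy paradigm for holistic disruption",
]
# The off-topic, buzzword-laden idea scores far above the anchored ones.
print([round(word_atypicality(i, ideas), 2) for i in ideas])  # [0.0, 0.56, 1.0]
```

The score is transparent by construction: for any flagged idea, one can list exactly which words fell outside the shared vocabulary.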

Our research provides a practical answer: Pair a simple AI screen called the Idea Screening Efficiency (ISE) curve with managerial expertise. This combination helps organizations handle idea overload systematically by cutting review time while keeping control over what managers are willing to miss.

The typical tradeoff in idea screening

At its core, idea screening is a classification problem with asymmetrical risks. Rejecting weak ideas saves time and money; rejecting a strong idea risks losing the next big thing. The more ideas you reject, the higher the risk of also rejecting strong and useful ideas. In traditional idea contests, managers tend to accept screening out 25% of all ideas without sacrificing more than 15% of good ideas or screening out 50% of all ideas without sacrificing more than 30% of good ideas. These percentages can serve as benchmarks for any AI-based screening solution.

“Decide up front how many good ideas you can afford to miss in exchange for speed and cost savings.”

The playbook: how to use the ISE curve – from theory to action

Earlier idea-screening models struggle to mimic expert choices in the real world. Managers need a transparent rule for idea screening that scales, keeps winners and can be governed. The ISE curve is theoretically grounded but can easily be applied in practice by following the steps below. Figure 2 highlights common pitfalls.

FIGURE 2

Pitfalls of AI screening and how to avoid them

GettyImages/Filograph

Define your loss function

Decide up front how many good ideas you can afford to miss in exchange for speed and cost savings. Depending on the number of available experts and time restrictions, different businesses set different tolerances. Commoditized categories with known requirements can accept a slightly higher miss rate to speed up the process. Early, ambiguous categories should be conservative because you may not want to risk losing the next big thing.

“The AI screen can easily be integrated into ideation contests to save substantial resources and time.”

Use word atypicality as a pragmatic signal

Word atypicality compares the vocabulary of each idea to the contest's overall word set. Ideas with low word atypicality, i.e., ideas that are well anchored, are rated higher by experts. Ideas that rely on idiosyncratic wording with little overlap are more likely to be misaligned or thin. Atypicality is easy to compute and explain.

Pick your operating point on the ISE curve

Use the ISE curve to set your initial cut. If time is tight and categories are mature, start around a moderate screen. If the domain is novel or reputational risk is high, start lighter.
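Choosing the operating point can be automated once the loss function is fixed. The helper below is a hypothetical sketch, not part of the article's toolkit: given ISE-curve points of (share screened out, share of good ideas lost), it takes the deepest cut that still respects the tolerance. The 15% tolerance mirrors the managerial benchmark cited earlier; the curve points are invented.

```python
def pick_operating_point(curve, max_good_loss):
    """Return the (screened, lost) point with the largest screened share
    whose loss of good ideas stays within tolerance, or None if none does."""
    feasible = [p for p in curve if p[1] <= max_good_loss]
    return max(feasible, key=lambda p: p[0]) if feasible else None

# Invented curve: screening deeper loses progressively more good ideas.
curve = [(0.10, 0.00), (0.25, 0.05), (0.44, 0.14), (0.60, 0.30)]
print(pick_operating_point(curve, max_good_loss=0.15))  # (0.44, 0.14)
```

A conservative team in a novel domain would simply pass a smaller `max_good_loss` and accept a lighter screen.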

Track, learn and adjust

Whichever point you choose, log the decision and track the downstream effects over time: how much review time you saved and how many good ideas you missed. Avoid the “set-and-forget” trap. Language evolves, teams change, and contest goals differ.

How to integrate the AI screen into the entire ideation workflow

The AI screen can easily be integrated into ideation contests to save substantial resources and time (Figure 3). A steering team starts by running a crowd ideation challenge, collecting ideas for a specific domain, which might yield 3,000 ideas. Traditionally, several experts would read them all over three weeks. Despite high costs and long durations, the ratings are likely to be inconsistent, and patterns might be missed. Instead, the team could use the suggested AI screen: clarify their loss function and choose an initial ISE cut – for example, a moderate cut for core categories and a lighter cut for frontier tech. Then they run the screen, and the system filters the initial idea pool. The remaining ideas can be assigned to focused shortlists for individual expert evaluation, with experts matched to ideas in their domain of expertise. Providing standardized evaluation categories increases transparency and helps track the outcomes. With a reduced list, experts can spend time on deeper evaluation, debate single ideas if necessary, and select winning ideas. This approach ensures that the share of lost good ideas stays within tolerance. The time-to-shortlist shrinks substantially, and satisfaction among experts rises because they spend their time on a focused set of promising ideas.

“By pairing simple AI tools with managerial oversight, organizations can screen faster, decide smarter and focus human expertise where it matters most.”

FIGURE 3

How to integrate the AI screen into an ideation process

iStock/BlackJack3D; Icons: Midjourney AI prompts by GCO

Intelligent selection wins

AI doesn’t replace human judgment in ideation but refines it. The ISE curve offers a transparent, data-driven way to manage idea overload without losing sight of strategic control. By pairing simple AI tools with managerial oversight, organizations can screen faster, decide smarter and focus human expertise where it matters most: recognizing the ideas that truly move the business forward.

Language: English
Page range: 42 - 47
Published on: Apr 8, 2026
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2026 J. Jason Bell, Christian Pescher, Gerard J. Tellis, Johann Füller, published by Nuremberg Institute for Market Decisions
This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 License.