We-Math
API
A benchmark that evaluates large multimodal models (LMMs) on their ability to perform human-like mathematical reasoning.
Overall: 0.0 (Very Poor) · 0 reviews
Score Breakdown
Performance: 0.0 (weight 25%)
Reliability: 0.0 (weight 20%)
Ease of Use: 0.0 (weight 15%)
Value: 0.0 (weight 15%)
Trust: 0.0 (weight 15%)
Delight: 0.0 (weight 10%)
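The category weights sum to 100%, which suggests the overall score is a weighted average of the per-category scores. Below is a minimal sketch of that aggregation, assuming a simple weighted mean; the WEIGHTS mapping and overall_score function are illustrative, not the site's actual implementation.

# Sketch: overall score as a weighted average of category scores,
# assuming the listed percentages are aggregation weights (illustrative only).
WEIGHTS = {
    "Performance": 0.25,
    "Reliability": 0.20,
    "Ease of Use": 0.15,
    "Value": 0.15,
    "Trust": 0.15,
    "Delight": 0.10,
}

def overall_score(scores: dict[str, float]) -> float:
    # Missing categories default to 0.0, matching the empty-state display above.
    return sum(WEIGHTS[cat] * scores.get(cat, 0.0) for cat in WEIGHTS)

# With no reviews, every category is 0.0, so the overall score is 0.0.
print(overall_score({cat: 0.0 for cat in WEIGHTS}))  # -> 0.0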
Reviews (0)
No reviews yet.