
Synchronous HTTP Interface

The synchronous HTTP interface lets you send request parameters and audio data to the server and receive the speech recognition result as the response.

Usage

Sending a speech recognition request

The endpoint differs depending on whether logs are saved.

POST https://acp-api.amivoice.com/v1/recognize   (logs saved)
POST https://acp-api.amivoice.com/v1/nolog/recognize (no logs saved)

For the difference between the two, see Logging.

The required request parameters, i.e. the authentication information, the name of the engine to connect to, and the audio data, are specified as:

  • u={authentication information}
  • d={connection engine name}
  • a={binary audio data}

using these parameter names, and sent to the server via multipart POST. The binary audio data must be placed in the last part of the HTTP multipart body.
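When building the request without curl, this part ordering has to be preserved by hand. Below is a minimal sketch using only the Python standard library; the boundary string and the {APP_KEY} placeholder follow the examples in this document, and the audio bytes are a stand-in for the contents of test.wav:

```python
import io

def build_multipart(fields, audio_bytes, boundary="some-boundary-string"):
    """Build a multipart/form-data body in which the binary audio
    part "a" is always the last part, as the API requires."""
    buf = io.BytesIO()
    for name, value in fields.items():          # text parts first: u, d, (c), ...
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(f'Content-Disposition: form-data; name="{name}"\r\n\r\n'.encode())
        buf.write(value.encode("utf-8") + b"\r\n")
    buf.write(f"--{boundary}\r\n".encode())     # the audio part goes last
    buf.write(b'Content-Disposition: form-data; name="a"\r\n')
    buf.write(b"Content-Type: application/octet-stream\r\n\r\n")
    buf.write(audio_bytes + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data;boundary={boundary}"

# stand-in bytes; in practice read them from test.wav
audio = b"\x00\x01\x02\x03"
body, content_type = build_multipart({"u": "{APP_KEY}", "d": "-a-general"}, audio)

# the request could then be sent with e.g. urllib.request:
#   req = urllib.request.Request(
#       "https://acp-api.amivoice.com/v1/nolog/recognize",
#       data=body, headers={"Content-Type": content_type})
#   urllib.request.urlopen(req)
```

curl's -F options keep the order they are given on the command line, which is why the examples below always put -F a=... last.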

Let's actually send a speech recognition request using the curl command. To run speech recognition on the audio file included with the samples (test.wav) using the conversation (general-purpose) engine (-a-general), do the following. Here we connect to the "no logs" endpoint, so no audio log is retained on the server.

curl https://acp-api.amivoice.com/v1/nolog/recognize \
-F u={APP_KEY} \
-F d=-a-general \
-F a=@test.wav
Structure of the HTTP header and body of the multipart POST request

The structure is as follows:

POST https://acp-api.amivoice.com/v1/recognize
Content-Type: multipart/form-data;boundary=some-boundary-string

--some-boundary-string
Content-Disposition: form-data; name="u"

(this part carries the <APPKEY>)
--some-boundary-string
Content-Disposition: form-data; name="d"

-a-general
--some-boundary-string
Content-Disposition: form-data; name="a"
Content-Type: application/octet-stream

(the last part carries the audio data)
--some-boundary-string--
Note

Parameters set after the a parameter are ignored.

For example, if the u parameter is placed last as shown below, an authentication error occurs.

curl https://acp-api.amivoice.com/v1/nolog/recognize \
-F d=-a-general \
-F a=@test.wav \
-F u={APP_KEY} # u specified after a

Response

{
"results": [
{
"tokens": [],
"tags": [],
"rulename": "",
"text": ""
}
],
"text": "",
"code": "-",
"message":"received illegal service authorization"
}

Likewise, if the d parameter is placed last as shown below, an error occurs saying the specified speech recognition engine cannot be found.

curl https://acp-api.amivoice.com/v1/nolog/recognize \
-F u={APP_KEY} \
-F a=@test.wav \
-F d=-a-general # d specified after a

Response

{
"results": [
{
"tokens": [],
"tags": [],
"rulename": "",
"text": ""
}
],
"text": "",
"code": "!",
"message": "failed to connect to recognizer server (can't find available servers)"
}

For details on the response, see Speech Recognition Results.

Specifying the audio format

If the audio you send is not audio data with a header (such as WAV or Ogg), you must specify the audio format in the request parameter c.

  • c={audio format}

For the formats that can be specified, see the Audio Format Table.

For example, to send the audio file test.pcm, whose sampling rate is 16 kHz, bit depth is 16 bits, and byte order is little-endian, specify LSB16K in the c parameter as follows:

curl https://acp-api.amivoice.com/v1/recognize \
-F u={APP_KEY} \
-F d=-a-general \
-F c=LSB16K \
-F a=@test.pcm
Structure of the HTTP header and body of the multipart POST request

The structure is as follows:

POST https://acp-api.amivoice.com/v1/recognize
Content-Type: multipart/form-data;boundary=some-boundary-string

--some-boundary-string
Content-Disposition: form-data; name="u"

(this part carries the <APPKEY>)
--some-boundary-string
Content-Disposition: form-data; name="d"

-a-general
--some-boundary-string
Content-Disposition: form-data; name="c"

LSB16K
--some-boundary-string
Content-Disposition: form-data; name="a"
Content-Type: application/octet-stream
(the last part carries the audio data)
--some-boundary-string--
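Whether c is needed can be decided locally by checking for a container header. The check below is not part of the API; it is just a heuristic that inspects the magic bytes of the two header formats named above:

```python
def needs_format_param(audio_bytes: bytes) -> bool:
    """Local heuristic (not part of the API): True when the data has no
    recognizable container header, so the c parameter must be sent."""
    return audio_bytes[:4] not in (b"RIFF", b"OggS")   # WAV / Ogg magic bytes

params = {"u": "{APP_KEY}", "d": "-a-general"}
pcm = b"\x12\x00\x34\x00\x56\x00"      # stand-in for raw 16-bit PCM samples
if needs_format_param(pcm):
    params["c"] = "LSB16K"             # 16 kHz, 16-bit, little-endian
```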

Multiple parameters

If you want to set request parameters other than the required ones, such as a profile ID (profileId), you can pack multiple parameters into the d parameter like this:

d=<key>=<value> <key>=<value> <key>=<value> ...
  • Separate each <key>=<value> pair with a half-width space or a newline.
  • The connection engine name is required, so in this case specify it with the key grammarFileNames, i.e. grammarFileNames=-a-general.

Example:

curl https://acp-api.amivoice.com/v1/recognize \
-F u={APP_KEY} \
-F d="grammarFileNames=-a-general profileId=:user01" \
-F a=@test.wav

The <value> in each "<key>=<value>" above must be URL-encoded. For example, to register a word in profileWords whose written form is "www" and whose reading is "とりぷるだぶる", encode the space between them as %20 and encode とりぷるだぶる as %E3%81%A8%E3%82%8A%E3%81%B7%E3%82%8B%E3%81%A0%E3%81%B6%E3%82%8B

curl https://acp-api.amivoice.com/v1/recognize \
-F u={APP_KEY} \
-F d="grammarFileNames=-a-general profileWords=www%20%E3%81%A8%E3%82%8A%E3%81%B7%E3%82%8B%E3%81%A0%E3%81%B6%E3%82%8B" \
-F a=@test.wav
Notes
  • Use the UTF-8 character encoding.
  • The URL encoding used here converts half-width spaces to "%20", not to "+".
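The encoding rules above can be reproduced with any standard URL-encoding routine, as long as it emits %20 for spaces. A Python sketch (build_d is a hypothetical helper name, not part of the API):

```python
from urllib.parse import quote

def build_d(params: dict) -> str:
    """Join key=value pairs with half-width spaces; each value is
    URL-encoded with spaces as %20 (quote never emits '+')."""
    return " ".join(f"{k}={quote(v, safe='')}" for k, v in params.items())

d = build_d({
    "grammarFileNames": "-a-general",
    "profileWords": "www とりぷるだぶる",   # written form, space, reading
})
```

The resulting string is what the curl example above passes via -F d="...".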

Sending parameters as a URL query string

Parameters other than a, namely c, d, and u, can be sent either as a URL query string or as multipart parts in the HTTP body.

Notes
  • To avoid hitting HTTP header size limits, it is recommended to send all parameters as multipart parts.
  • If the same parameter is specified both in the URL query string and in the multipart body, the query string value takes precedence.
  • Although u can be sent as a query string, it may leak by remaining in logs along the communication path, so always send it as a multipart part in the HTTP body.

If you send the d parameter as a query string, the value of the d parameter must be URL-encoded once more.

https://acp-api.amivoice.com/v1/recognize?d=grammarFileNames%3D-a-general%20profileWords%3Dhogehoge%2520%25E3%2581%25BB%25E3%2581%2592%25E3%2581%25BB%25E3%2581%2592%25E3%2581%25A6%25E3%2581%2599%25E3%2581%25A8

"%" becomes "%25", "=" becomes "%3D", and half-width spaces become "%20".
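The double encoding amounts to applying an ordinary URL-encoding pass to the already-encoded d value. A Python sketch, using a profileWords value of the kind shown earlier:

```python
from urllib.parse import quote

# d value as used in the multipart case; inner values are already encoded
d_value = "grammarFileNames=-a-general profileWords=www%20%E3%81%A8%E3%82%8A%E3%81%B7%E3%82%8B%E3%81%A0%E3%81%B6%E3%82%8B"

# one more encoding pass for the query string:
# '=' -> %3D, ' ' -> %20, and the existing '%' signs -> %25
query_d = quote(d_value, safe="")
url = "https://acp-api.amivoice.com/v1/recognize?d=" + query_d
```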

Results

On success, you will get a result in JSON format like the following:

{"results":[{"tokens":[{"written":"\u30a2\u30c9\u30d0\u30f3\u30b9\u30c8\u30fb\u30e1\u30c7\u30a3\u30a2","confidence":1.00,"starttime":522,"endtime":1578,"spoken":"\u3042\u3069\u3070\u3093\u3059\u3068\u3081\u3067\u3043\u3042"},{"written":"\u306f","confidence":1.00,"starttime":1578,"endtime":1866,"spoken":"\u306f"},{"written":"\u3001","confidence":0.72,"starttime":1866,"endtime":2026,"spoken":"_"},{"written":"\u4eba","confidence":1.00,"starttime":2026,"endtime":2314,"spoken":"\u3072\u3068"},{"written":"\u3068","confidence":1.00,"starttime":2314,"endtime":2426,"spoken":"\u3068"},{"written":"\u6a5f\u68b0","confidence":1.00,"starttime":2426,"endtime":2826,"spoken":"\u304d\u304b\u3044"},{"written":"\u3068","confidence":1.00,"starttime":2826,"endtime":2938,"spoken":"\u3068"},{"written":"\u306e","confidence":1.00,"starttime":2938,"endtime":3082,"spoken":"\u306e"},{"written":"\u81ea\u7136","confidence":1.00,"starttime":3082,"endtime":3434,"spoken":"\u3057\u305c\u3093"},{"written":"\u306a","confidence":1.00,"starttime":3434,"endtime":3530,"spoken":"\u306a"},{"written":"\u30b3\u30df\u30e5\u30cb\u30b1\u30fc\u30b7\u30e7\u30f3","confidence":1.00,"starttime":3530,"endtime":4378,"spoken":"\u3053\u307f\u3085\u306b\u3051\u30fc\u3057\u3087\u3093"},{"written":"\u3092","confidence":1.00,"starttime":4378,"endtime":4442,"spoken":"\u3092"},{"written":"\u5b9f\u73fe","confidence":1.00,"starttime":4442,"endtime":4922,"spoken":"\u3058\u3064\u3052\u3093"},{"written":"\u3057","confidence":1.00,"starttime":4922,"endtime":5434,"spoken":"\u3057"},{"written":"\u3001","confidence":0.45,"starttime":5434,"endtime":5562,"spoken":"_"},{"written":"\u8c4a\u304b","confidence":1.00,"starttime":5562,"endtime":5994,"spoken":"\u3086\u305f\u304b"},{"written":"\u306a","confidence":1.00,"starttime":5994,"endtime":6090,"spoken":"\u306a"},{"written":"\u672a\u6765","confidence":1.00,"starttime":6090,"endtime":6490,"spoken":"\u307f\u3089\u3044"},{"written":"\u3092","confidence":1.00,"starttime":6490,"endtime":6554,"spoken":"\u3092"},{"written":"\u5275\u9020","confidence":0.93,"starttime":6554,"endtime":7050,"spoken":"\u305d\u3046\u305e\u3046"},{"written":"\u3057\u3066","confidence":0.99,"starttime":7050,"endtime":7210,"spoken":"\u3057\u3066"},{"written":"\u3044\u304f","confidence":1.00,"starttime":7210,"endtime":7418,"spoken":"\u3044\u304f"},{"written":"\u3053\u3068","confidence":1.00,"starttime":7418,"endtime":7690,"spoken":"\u3053\u3068"},{"written":"\u3092","confidence":1.00,"starttime":7690,"endtime":7722,"spoken":"\u3092"},{"written":"\u76ee\u6307\u3057","confidence":0.76,"starttime":7722,"endtime":8090,"spoken":"\u3081\u3056\u3057"},{"written":"\u307e\u3059","confidence":0.76,"starttime":8090,"endtime":8506,"spoken":"\u307e\u3059"},{"written":"\u3002","confidence":0.82,"starttime":8506,"endtime":8794,"spoken":"_"}],"confidence":0.998,"starttime":250,"endtime":8794,"tags":[],"rulename":"","text":"\u30a2\u30c9\u30d0\u30f3\u30b9\u30c8\u30fb\u30e1\u30c7\u30a3\u30a2\u306f\u3001\u4eba\u3068\u6a5f\u68b0\u3068\u306e\u81ea\u7136\u306a\u30b3\u30df\u30e5\u30cb\u30b1\u30fc\u30b7\u30e7\u30f3\u3092\u5b9f\u73fe\u3057\u3001\u8c4a\u304b\u306a\u672a\u6765\u3092\u5275\u9020\u3057\u3066\u3044\u304f\u3053\u3068\u3092\u76ee\u6307\u3057\u307e\u3059\u3002"}],"utteranceid":"20220602/14/018122d637320a301bc194c9_20220602_141433","text":"\u30a2\u30c9\u30d0\u30f3\u30b9\u30c8\u30fb\u30e1\u30c7\u30a3\u30a2\u306f\u3001\u4eba\u3068\u6a5f\u68b0\u3068\u306e\u81ea\u7136\u306a\u30b3\u30df\u30e5\u30cb\u30b1\u30fc\u30b7\u30e7\u30f3\u3092\u5b9f\u73fe\u3057\u3001\u8c4a\u304b\u306a\u672a\u6765\u3092\u5275\u9020\u3057\u3066\u3044\u304f\u3053\u3068\u3092\u76ee\u6307\u3057\u307e\u3059\u3002","code":"","message":""}

The Japanese contained in the recognition result is in UTF-8 Unicode-escaped form. It can easily be restored with the JSON parser bundled with your development language. Here we use the jq command to convert it.

curl -F a=@test.wav "https://acp-api.amivoice.com/v1/recognize?d=-a-general&u=<APPKEY>" | jq

This time the Japanese in the recognition result should be displayed in readable form, with indentation. Look for text in the result; it contains the transcription of the speech.

"text": "アドバンスト・メディアは、人と機械との自然なコミュニケーションを実現し、豊かな未来を創造していくことを目指します。"
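The same decoding the jq step performs is available in any language with a JSON parser. For example, in Python, json.loads turns the \uXXXX escapes back into characters (the fragment below is a shortened stand-in for the real response):

```python
import json

# a fragment in the escaped form the server returns
raw = '{"text": "\\u30a2\\u30c9\\u30d0\\u30f3\\u30b9\\u30c8"}'
result = json.loads(raw)
print(result["text"])  # アドバンスト
```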

Below is the complete response example. Besides the transcription, you can also obtain word-level results, utterance timings, confidence scores, and more. For details, see the Speech Recognition Result Format.

Response
{
"results": [
{
"tokens": [
{
"written": "アドバンスト・メディア",
"confidence": 1,
"starttime": 522,
"endtime": 1578,
"spoken": "あどばんすとめでぃあ"
},
{
"written": "は",
"confidence": 1,
"starttime": 1578,
"endtime": 1866,
"spoken": "は"
},
{
"written": "、",
"confidence": 0.72,
"starttime": 1866,
"endtime": 2026,
"spoken": "_"
},
{
"written": "人",
"confidence": 1,
"starttime": 2026,
"endtime": 2314,
"spoken": "ひと"
},
{
"written": "と",
"confidence": 1,
"starttime": 2314,
"endtime": 2426,
"spoken": "と"
},
{
"written": "機械",
"confidence": 1,
"starttime": 2426,
"endtime": 2826,
"spoken": "きかい"
},
{
"written": "と",
"confidence": 1,
"starttime": 2826,
"endtime": 2938,
"spoken": "と"
},
{
"written": "の",
"confidence": 1,
"starttime": 2938,
"endtime": 3082,
"spoken": "の"
},
{
"written": "自然",
"confidence": 1,
"starttime": 3082,
"endtime": 3434,
"spoken": "しぜん"
},
{
"written": "な",
"confidence": 1,
"starttime": 3434,
"endtime": 3530,
"spoken": "な"
},
{
"written": "コミュニケーション",
"confidence": 1,
"starttime": 3530,
"endtime": 4378,
"spoken": "こみゅにけーしょん"
},
{
"written": "を",
"confidence": 1,
"starttime": 4378,
"endtime": 4442,
"spoken": "を"
},
{
"written": "実現",
"confidence": 1,
"starttime": 4442,
"endtime": 4922,
"spoken": "じつげん"
},
{
"written": "し",
"confidence": 1,
"starttime": 4922,
"endtime": 5434,
"spoken": "し"
},
{
"written": "、",
"confidence": 0.45,
"starttime": 5434,
"endtime": 5562,
"spoken": "_"
},
{
"written": "豊か",
"confidence": 1,
"starttime": 5562,
"endtime": 5994,
"spoken": "ゆたか"
},
{
"written": "な",
"confidence": 1,
"starttime": 5994,
"endtime": 6090,
"spoken": "な"
},
{
"written": "未来",
"confidence": 1,
"starttime": 6090,
"endtime": 6490,
"spoken": "みらい"
},
{
"written": "を",
"confidence": 1,
"starttime": 6490,
"endtime": 6554,
"spoken": "を"
},
{
"written": "創造",
"confidence": 0.93,
"starttime": 6554,
"endtime": 7050,
"spoken": "そうぞう"
},
{
"written": "して",
"confidence": 0.99,
"starttime": 7050,
"endtime": 7210,
"spoken": "して"
},
{
"written": "いく",
"confidence": 1,
"starttime": 7210,
"endtime": 7418,
"spoken": "いく"
},
{
"written": "こと",
"confidence": 1,
"starttime": 7418,
"endtime": 7690,
"spoken": "こと"
},
{
"written": "を",
"confidence": 1,
"starttime": 7690,
"endtime": 7722,
"spoken": "を"
},
{
"written": "目指し",
"confidence": 0.76,
"starttime": 7722,
"endtime": 8090,
"spoken": "めざし"
},
{
"written": "ます",
"confidence": 0.76,
"starttime": 8090,
"endtime": 8506,
"spoken": "ます"
},
{
"written": "。",
"confidence": 0.82,
"starttime": 8506,
"endtime": 8794,
"spoken": "_"
}
],
"confidence": 0.998,
"starttime": 250,
"endtime": 8794,
"tags": [],
"rulename": "",
"text": "アドバンスト・メディアは、人と機械との自然なコミュニケーションを実現し、豊かな未来を創造していくことを目指します。"
}
],
"utteranceid": "20220602/14/018122d65d370a30116494c8_20220602_141442",
"text": "アドバンスト・メディアは、人と機械との自然なコミュニケーションを実現し、豊かな未来を創造していくことを目指します。",
"code": "",
"message": ""
}
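Since the response is plain JSON, the word-level fields can be post-processed directly. The sketch below checks code for an error (the error examples earlier carry a non-empty code) and collects tokens below an arbitrary confidence threshold; the response dict is a trimmed-down stand-in for the full payload above:

```python
# trimmed-down response in the shape shown above
response = {
    "results": [{
        "tokens": [
            {"written": "アドバンスト・メディア", "confidence": 1.0,
             "starttime": 522, "endtime": 1578},
            {"written": "、", "confidence": 0.45,
             "starttime": 5434, "endtime": 5562},
        ],
        "text": "アドバンスト・メディアは、",
    }],
    "code": "",
    "message": "",
}

# an empty code means success; otherwise message explains the failure
assert response["code"] == "", response["message"]

# collect tokens below an arbitrary confidence threshold for review
low = [t["written"] for r in response["results"] for t in r["tokens"]
       if t["confidence"] < 0.5]
```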

Other documentation

  • For the API reference, see Synchronous HTTP Interface.
  • We also provide a client library (Hrp) that wraps the communication handling and procedure of the HTTP interface into a class library; by implementing only the interfaces your speech recognition application needs, you can build one easily. Start by running the sample program HrpTester. For the interface specification of the Hrp client library, see Hrp (HTTP interface client) under Client Libraries.