

allennlp-runmodel

Run a model trained with AllenNLP, and serve it with a web API.

Usage

Run the program

Run the program in a terminal; the --help option prints the help message:

$ allennlp-runmodel --help
Usage: allennlp-runmodel [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...

  Start a webservice for running AllenNLP models.

Options:
  -V, --version
  -h, --host TEXT                 TCP/IP host for HTTP server.  [default:
                                  localhost]
  -p, --port INTEGER              TCP/IP port for HTTP server.  [default:
                                  8000]
  -a, --path TEXT                 File system path for HTTP server Unix domain
                                  socket. Listening on Unix domain sockets is
                                  not supported by all operating systems.
  -l, --logging-config FILE       Path to logging configuration file (JSON,
                                  YAML or INI) (ref: https://docs.python.org/library/logging.config.html#logging-config-dictschema)
  -v, --logging-level [critical|fatal|error|warn|warning|info|debug|notset]
                                  Sets the logging level, only affected when
                                  `--logging-config` not specified.  [default:
                                  info]
  --help                          Show this message and exit.

Commands:
  load  Load a pre-trained AllenNLP model from it's archive file, and put
        it...

and

$ allennlp-runmodel load --help
Usage: allennlp-runmodel load [OPTIONS] ARCHIVE

  Load a pre-trained AllenNLP model from it's archive file, and put it into
  the webservice contrainer.

Options:
  -m, --model-name TEXT           Model name used in URL. eg: http://xxx.xxx.x
                                  xx.xxx:8000/?model=model_name
  -t, --num-threads INTEGER       Sets the number of OpenMP threads used for
                                  parallelizing CPU operations. [default: 4(on this machine)]
  -w, --max-workers INTEGER       Uses a pool of at most max_workers threads
                                  to execute calls asynchronously. [default:
                                  num_threads/cpu_count (1 on this machine)]
  -w, --worker-type [process|thread]
                                  Sets the workers execute in thread or
                                  process.  [default: process]
  -d, --cuda-device INTEGER       If CUDA_DEVICE is >=0, the model will be
                                  loaded onto the corresponding GPU. Otherwise
                                  it will be loaded onto the CPU.  [default:
                                  -1]
  -e, --predictor-name TEXT       Optionally specify which `Predictor`
                                  subclass; otherwise, the default one for the
                                  model will be used.
  --help                          Show this message and exit.

The load subcommand can be given multiple times to load more than one model.

For example:

allennlp-runmodel  --port 8080 load --model-name model1 /path/of/model1.tar.gz load --model-name model2 /path/of/model2.tar.gz

Making predictions from an HTTP client

curl \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"premise":"Two women are embracing while holding to go packages.","hypothesis":"The sisters are hugging goodbye while holding to go packages after just eating lunch."}' \
  'http://localhost:8080/?model=model1'
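
The same request can also be sent from Python. The sketch below is only an illustration (not part of this project); it assumes the third-party requests package is installed, the server was started on port 8080 as above, and a model was loaded under the name model1:

import requests

# Sketch of a client call: POST the premise/hypothesis pair as JSON and
# select the target model with the `model` query parameter.
payload = {
    "premise": "Two women are embracing while holding to go packages.",
    "hypothesis": "The sisters are hugging goodbye while holding to go packages after just eating lunch.",
}
response = requests.post(
    "http://localhost:8080/",
    params={"model": "model1"},
    json=payload,
)
response.raise_for_status()
print(response.json())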
