Finding the best matching word in a large vocabulary

Posted 2024-04-20 04:13:32


I have a pandas dataframe with two columns named Potential Word and Fixed Word. The Potential Word column contains words in different languages, both misspelled and correct, and the Fixed Word column contains the correct word corresponding to each Potential Word.

Below I have shared some sample data:

Potential Word    Fixed Word
E x e m p l e     Example
p i p o l         People
p i m p l e       Pimple
l u n i k         unique

My vocab dataframe contains 600K unique rows.

My solution:

key = given_word
glob_match_value = 0
potential_fixed_word = ''
match_threshold = 0.65
for each in df['Potential Word']:
    match_value = match(each, key) # match is a function that returns a 
    # similarity value of two strings
    if match_value > glob_match_value and match_value > match_threshold:
        glob_match_value = match_value
        potential_fixed_word = each

The problem

The problem with my code is that it takes a lot of time to fix each word, because the loop runs over the large vocabulary. On top of that, when a word is missing from the vocab, it takes almost 5 to 6 seconds to fix a sentence of 10 to 12 words. The match function itself performs well enough, so it is not the target of the optimization.

I need an optimized solution; please help me out here.


3 answers

From an Information Retrieval (IR) point of view, you need to reduce the search space. Matching the given_word (the key) against all Potential Words is certainly inefficient. Instead, you need to match against a reasonable number of candidates.

To find such candidates, you need to index the Potential Words and the Fixed Words:

import os

from whoosh.analysis import StandardAnalyzer
from whoosh.fields import Schema, TEXT
from whoosh.index import create_in

# The index directory must exist before create_in is called
os.makedirs("indexdir", exist_ok=True)

ix = create_in("indexdir", Schema(
    potential=TEXT(analyzer=StandardAnalyzer(stoplist=None), stored=True),
    fixed=TEXT(analyzer=StandardAnalyzer(stoplist=None), stored=True)
))
writer = ix.writer()
writer.add_document(potential='E x e m p l e', fixed='Example')
writer.add_document(potential='p i p o l', fixed='People')
writer.add_document(potential='p i m p l e', fixed='Pimple')
writer.add_document(potential='l u n i k', fixed='unique')
writer.commit()

Using this index, you can now search for a few candidates:

from whoosh.qparser import SimpleParser

with ix.searcher() as searcher:
    results = searcher.search(SimpleParser('potential', ix.schema).parse('p i p o l'))
    for result in results[:2]:
        print(result)

The output is:

<Hit {'fixed': 'People', 'potential': 'p i p o l'}>
<Hit {'fixed': 'Pimple', 'potential': 'p i m p l e'}>

Now you can run match against only a handful of candidates instead of all 600K.
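For illustration, here is a minimal sketch (not part of the original answer) of how the retrieved candidates could be rescored with the asker's match function; fix_word, top_k and the carried-over match_threshold are hypothetical names:

from whoosh.qparser import SimpleParser

def fix_word(given_word, ix, match, match_threshold=0.65, top_k=10):
    # Retrieve at most top_k candidates from the index, then run the
    # expensive match() only on those few hits
    parser = SimpleParser('potential', ix.schema)
    best_value, best_word = 0, ''
    with ix.searcher() as searcher:
        for hit in searcher.search(parser.parse(given_word), limit=top_k):
            value = match(hit['potential'], given_word)
            if value > best_value and value > match_threshold:
                best_value, best_word = value, hit['fixed']
    return best_word

# usage: fixed = fix_word('p i p o l', ix, match)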

But it is not perfect; that is the unavoidable trade-off, and it is also how IR fundamentally works. Try this with different numbers of candidates.

I would use the sortedcollections module. In general, access time on a sorted list or sorted data is O(log(n)) rather than O(n); in your case that means roughly 19.19 if/then checks (log2(600,000) ≈ 19.19) instead of 600,000 if/then checks.

from sortedcollections import SortedDict

Not much changes in the implementation, because I believe you still need to iterate over the list of potential words for each word.
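As one possible way to exploit the sorted order (my sketch, not the answerer's code): a SortedDict keyed by Potential Word allows a range lookup by prefix, so match() only has to run on the words that share a prefix with the given word. The prefix_len cutoff and the assumption that a misspelling keeps its first few characters are illustrative:

from sortedcollections import SortedDict

# Key the vocab by Potential Word; locating a key range is O(log n)
vocab = SortedDict(zip(df['Potential Word'], df['Fixed Word']))

def candidates(given_word, prefix_len=3):
    prefix = given_word[:prefix_len]
    # irange yields only the keys between prefix and prefix + '\uffff',
    # i.e. every Potential Word starting with that prefix
    return [(p, vocab[p]) for p in vocab.irange(prefix, prefix + '\uffff')]

# run the expensive match() only over candidates(given_word)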

My aim here is not to optimize the match function itself, but to use multiple threads to search in parallel.

import concurrent.futures
import time
from concurrent.futures.thread import ThreadPoolExecutor
from typing import Any, Union, Iterator

import pandas as pd

# Replace your dataframe here for testing this

df = pd.DataFrame({'Potential Word': ["a", "b", "c"], "Fixed Word": ["a", "c", "b"]})

# Replace by your match function

def match(w1, w2):
    # Simulate some work is happening here
    time.sleep(1)
    return 1

# This is mostly your function itself
# Using index to recreate the sentence from the returned values
def matcher(idx, given_word):
    key = given_word
    glob_match_value = 0
    potential_fixed_word = ''  # default case, you might want to change this
    match_threshold = 0.65
    for each in df['Potential Word']:
        match_value = match(each, key)  # match is a function that returns a
        # similarity value of two strings
        if match_value > glob_match_value and match_value > match_threshold:
            glob_match_value = match_value
            potential_fixed_word = each
    # Return after scanning the whole list so the best match (not just the
    # first one above the threshold) is kept
    return idx, potential_fixed_word


sentence = "match is a function that returns a similarity value of two strings match is a function that returns a " \
           "similarity value of two strings"

start = time.time()

# Using a threadpool executor 
# You can increase or decrease the max_workers based on your machine
executor: Union[ThreadPoolExecutor, Any]
with concurrent.futures.ThreadPoolExecutor(max_workers=24) as executor:
    futures: Iterator[Union[str, Any]] = executor.map(matcher, list(range(len(sentence.split()))), sentence.split())

# Joining back the input sentence
out_sentence = " ".join(x[1] for x in sorted(futures, key=lambda x: x[0]))
print(out_sentence)
print(time.time() - start)

Please note that the runtime of this will depend on:

  1. The longest time taken by a single match call
  2. The number of words in the sentence
  3. The number of worker threads (hint: see whether you can use as many workers as there are words in the sentence; a small sketch follows below)
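For illustration, a minimal sketch of that hint, sizing the pool to the sentence (the variable names are only for the example):

words = sentence.split()

# One worker per word, so every matcher() call can start immediately
with concurrent.futures.ThreadPoolExecutor(max_workers=len(words)) as executor:
    futures = executor.map(matcher, range(len(words)), words)

out_sentence = " ".join(x[1] for x in sorted(futures, key=lambda x: x[0]))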
