Set of Guidelines 1. Related to Implementing the Data Lake. The Data Lake is a centralized repository that stores data regardless of format or structure. It is often used to store large amounts of raw data for analytics, machine learning, and other data-driven applications. When implementing the Data Lake for storing documents, it is important to consider the following guidelines.

— Use a Schema-Less Approach. Documents can be semi-structured or unstructured. Using a schema-less repository allows storing documents in their original format, without a rigid structure, as discussed in [35].

— Choose a Distributed File System. The Data Lake must handle large numbers of documents and is accessed by multiple users at the same time. In this context, a distributed file system, such as the Hadoop Distributed File System (HDFS) [37], is an appropriate choice for storing raw documents.

— Employ a Metadata Management System. It is usually difficult to understand the meaning of documents without some context [35]. A metadata management system can help track and manage related data in the Data Lake.

In BigQA, we employ the Data Lake to store the raw documents inserted into the system. Examples of current technologies to store these documents include NoSQL databases like MongoDB, Apache CouchDB, Azure Cosmos DB, and Oracle NoSQL Database, as well as Blob services like Amazon S3, Azure Blob Storage, and Google Cloud Storage. Another possibility is to employ a Data Lakehouse [1] like Databricks and Delta Lake.
Set of Guidelines 2. Related to Implementing the Document Store. The Document Store is a database that stores textual data converted from documents. Most text data are either unstructured or semi-structured. The following guidelines should be considered when implementing the Document Store.
54 L. M. Pereira Moraes et al.

— Specify the Schema Design. The schema is the physical implementation of the data model. It defines how to store data in the Document Store. An example is keeping the text in a text field, the meta-information in a JSON field, and the relationships in a graph database. The schema comprises the following parts.
  • Text. Refers to textual data usually stored as plain text.
  • Meta-Information. Refers to textual data storing meta-information, such as authors, title, source identifier, and collection date.
  • Relationships. Refers to textual data storing relationships between other data, for instance, when two converted data are part of the same book.

— Choose a Data Indexing Method. Each Document Store tool has a unique method of indexing data. Choosing an appropriate indexing method guarantees efficient retrieval of documents based on the schema employed.

— Employ a Metadata Management System. Similar to the Data Lake, a metadata management system can help track and manage related data in the Document Store. The Data Lake and the Document Store typically share the same metadata management system.
The Document Store can be implemented using NoSQL databases [15] like Elasticsearch and OpenSearch, and vector databases [12] like Milvus, Pinecone, Qdrant, and FAISS. Factors to analyze when choosing the technology include the size and type of data to store and the level of scalability required by the Big Data Question Answering system to develop.
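The three-part schema design described above can be sketched as a plain record builder. The field names (`text`, `meta`, `relationships`) and the `make_document_record` helper are illustrative assumptions; a concrete Document Store would map them onto its own mapping or collection definition.

```python
def make_document_record(text, meta, relationships=None):
    """Build a Document Store record with the three parts of the schema:
    plain text, a JSON meta-information field, and explicit relationships."""
    return {
        "text": text,  # textual data stored as plain text
        "meta": {      # meta-information kept as a JSON field
            "authors": meta.get("authors", []),
            "title": meta.get("title"),
            "source_id": meta.get("source_id"),
            "collection_date": meta.get("collection_date"),
        },
        # relationships to other converted data, e.g. chunks of the same book
        "relationships": relationships or [],
    }
```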
Set of Guidelines 3. Related to Building the Big Data Storage Layer. The Big Data Storage Layer is responsible for storing and processing large volumes of data. Furthermore, it must be scalable and fault-tolerant. The layer performs multiple processing, cleaning, and transformation steps for each document to prepare the data to be stored in the Document Store. The guidelines to observe when implementing the Big Data Storage Layer are detailed as follows.

— Operate with a Batch Processing Pattern. Converting the raw data from the Data Lake to insert into the Document Store involves loading, cleaning, transforming, and analyzing many documents. Thus, this layer must support batch processing to clean and transform the documents into converted text data, using batch processing patterns such as Lambda [11] and Sigma [5].
— Operate with a Stream Processing Pattern. Stream processing operations process text data in near real-time. Developers optionally implement stream processing. These operations clean and transform documents before storing the related converted data in the Document Store, using stream processing patterns such as Kappa [38] and Sigma [5].
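A batch processing step of this layer can be sketched as follows: documents loaded from the Data Lake are cleaned and emitted in fixed-size batches ready for the Document Store. The cleaning rule and the `batch_convert` helper are simplified assumptions for illustration.

```python
import re

def clean(text: str) -> str:
    """Normalize whitespace and strip surrounding blanks."""
    return re.sub(r"\s+", " ", text).strip()

def batch_convert(raw_documents, batch_size=1000):
    """Process Data Lake documents in fixed-size batches, yielding
    converted records ready for insertion into the Document Store."""
    batch = []
    for doc in raw_documents:
        batch.append({"id": doc["id"], "text": clean(doc["text"])})
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly partial, batch
        yield batch
```

A stream processing variant would apply the same `clean` transformation per arriving document instead of per batch.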
Set of Guidelines 4. Related to Building the Big Querying Layer. In the Big Querying Layer, the Document Retriever retrieves a set of documents relevant to a given query. The Document Reader reads the retrieved documents and extracts the answers to the query. When building the Big Querying Layer, it is essential to consider the guidelines described as follows.
BigQA: Big Data Question Answering Architecture 55
— Specify Separate or Combined Components. The Document Retriever and the Document Reader can be implemented separately. In this case, they can be implemented using different technologies, but this makes them harder to maintain and integrate. On the other hand, they can be combined using closely related solutions. Combining components makes integration tighter and easier. However, separate solutions offer a greater level of customization than combined solutions.
— Choose the Appropriate Document Retriever. When implemented separately, the Document Retriever can use different techniques to rank documents. These techniques include sparse methods like TF-IDF [34] and BM25 [31], as well as dense methods like DPR [13].

— Choose the Appropriate Document Reader. When implemented separately, the Document Reader typically uses a variety of robust NLP techniques to extract the answers, such as GPT [29], BERT [7], and RoBERTa [19].
— Choose the Appropriate Combined Solution. When using combined solutions, choose a solution that reflects the software used in the company and its use case needs. Examples of proprietary solutions that combine these two components are IBM Watson Discovery, Amazon Kendra, and Sinch AskFrank.
— Ensure Implementation Scalability. The Document Retriever and the Document Reader should scale to handle large amounts of data. Examples of container orchestration technologies to deploy these components at scale are Kubernetes [28], Docker Swarm, Red Hat OpenShift, and Amazon Elastic Container Service.
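For concreteness, the BM25 ranking method mentioned for the Document Retriever can be sketched in a few lines over pre-tokenized documents. This is a didactic implementation with common default parameters (k1 = 1.5, b = 0.75), not the tuned implementation used by production search engines.

```python
import math
from collections import Counter

def bm25_rank(query_terms, documents, k1=1.5, b=0.75):
    """Score token-list documents against a query with BM25 (a sparse
    retrieval method); returns (doc_index, score) pairs, best first."""
    N = len(documents)
    avgdl = sum(len(d) for d in documents) / N
    df = Counter()  # document frequency of each term
    for doc in documents:
        df.update(set(doc))
    scores = []
    for i, doc in enumerate(documents):
        tf = Counter(doc)  # term frequency within this document
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append((i, score))
    return sorted(scores, key=lambda x: x[1], reverse=True)
```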
Set of Guidelines 5. Related to Building the Insights Layer. The Insights Layer is responsible for providing insights from the data. Indeed, organizations can obtain valuable insights to improve their business performance and decision-making by using the right tools. When building the Insights Layer, it is essential to follow the guidelines described as follows.
— Choose a Reporting Tool. Reporting tools are used to generate reports that summa-
rize the data. They can track trends and support decisions. There are many reporting
tools available, such as Google Data Studio, Microsoft Power BI, Tableau, Pentaho,
Metabase, Apache Superset, and Kibana.
— Choose a Data Analysis Tool. Data analysis tools are used to analyze the data in more detail. They find patterns, correlations, and outliers. Python, R, and Julia are programming languages used for data analysis. SAS, Dataiku, JupyterLab, and Apache Zeppelin are examples of data analysis tools.
— Choose a Data Mining Tool. Data mining tools are used to discover hidden data
patterns and relationships. These tools can support the investigation of new insights
that would not be visible from the data alone. Examples of data mining tools include
WEKA, KNIME, RapidMiner, and Orange.
5.2 Pipelines for Architecture Instantiation
In this section, we provide three examples of pipelines for the architecture proposed in Sect. 4, using the guidelines introduced in Sect. 5.1. These pipelines demonstrate the practical implementation of the proposed BigQA architecture and showcase its implementation versatility and adaptability.
The first pipeline, depicted in Fig. 2, uses open-source technologies. Regarding the Big Data Storage Layer, we employ Delta Lake⁶ as the Data Lake component and Elasticsearch⁷ as the Document Store. Delta Lake is an open-source storage tool that manages large-scale, constantly evolving data lakes with support for big data processing frameworks, such as Apache Spark. Furthermore, Elasticsearch is an open-source big data text search engine designed to be distributed, scalable, and near real-time capable [15].
In the Big Querying Layer, we use the following open-source algorithms: BM25 [31] as the Document Retriever and RoBERTa [19] as the Document Reader. As the pipeline is independent of data input and output, we can employ any components for implementing the Input and Communication Layers. In Fig. 2, the Input Layer receives documents as JSON files, and the Communication Layer supports users interacting through an API component.
By incorporating open-source technologies into BigQA, we harness the power of collaborative development and community support, fostering innovation and adaptability for the instantiated architecture. Furthermore, BigQA ensures accessibility and flexibility in its implementation.
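Since Elasticsearch scores full-text `match` queries with BM25 by default, the Document Retriever of this pipeline essentially issues request bodies like the one built below. The index field name `content` and the `build_retriever_query` helper are assumptions for illustration.

```python
def build_retriever_query(question: str, top_k: int = 20) -> dict:
    """Build an Elasticsearch request body for full-text retrieval.
    Elasticsearch ranks `match` queries with BM25 similarity by default,
    so this request behaves as a BM25 Document Retriever."""
    return {
        "size": top_k,  # number of candidate documents to return
        "query": {"match": {"content": {"query": question}}},
    }
```

The resulting dictionary would be sent to the search endpoint of the target index; only the field name has to match the Document Store schema.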
Fig. 2. BigQA pipeline using open-source technologies. (Input Layer: JSON documents; Big Data Storage Layer: Delta Lake as Data Lake and Elasticsearch as Document Store; Big Querying Layer: BM25 as Document Retriever and RoBERTa as Document Reader; Communication Layer: API.)
Figure 3 depicts the second pipeline, which uses paid proprietary technologies. We employ the AWS Cloud Service. We use AWS S3 and Amazon OpenSearch⁸ to implement the Data Lake and the Document Store, respectively. Moreover, we employ Amazon Kendra, a Platform-as-a-Service (PaaS) solution, as the Big Querying Layer, combining the functionalities of the Document Retriever and the Document Reader components into one core query engine.
Fig. 3. BigQA pipeline using paid proprietary technologies. (Input Layer: documents; Big Data Storage Layer: AWS S3 as Data Lake and Amazon OpenSearch as Document Store; Big Querying Layer: Amazon Kendra as combined Document Retriever and Document Reader; Communication Layer: front-end.)
6 Delta Lake - https://delta.io/.
7 Elasticsearch - https://www.elastic.co/.
8 Amazon OpenSearch - https://aws.amazon.com/opensearch-service/.
By incorporating paid proprietary technologies into BigQA, we can obtain tailored support and specialized tools and services. Furthermore, BigQA can provide enterprise-grade features to enhance its performance, robustness, and reliability while meeting specific business requirements.
Each Big Data Question Answering system may have different requirements, so the implementation of BigQA does not have to include every layer shown in its architecture (Sect. 4). Developers should choose the appropriate components and layers according to their specific requirements. For instance, the pipelines described in Figs. 2 and 3 contain only four layers and a few components.
However, incorporating data mining and data analysis tools into pipelines is required to support decision-making. The third pipeline, depicted in Fig. 4, shows how the open-source pipeline illustrated in Fig. 2 can be extended with open-source monitoring and reporting tools. The Insights Layer uses Metricbeat for metrics, Logstash for logging, and Kibana for reporting. Metricbeat collects metrics from systems and applications, such as memory and disk usage. Logstash collects, processes, and stores data from various sources, such as metric and logging sources. It collects metrics from Metricbeat and stores them in a database format. Kibana is a web-based visualization tool that can visualize the formatted data from Logstash. It creates charts, graphs, and dashboards to help managers monitor the system and extract data insights.
Fig. 4. BigQA pipeline extending Fig. 2 to encompass the Insights Layer. (Insights Layer: Metricbeat as metric tool, Logstash as logging tool, and Kibana as reporting tool.)
We strongly recommend using innovative approaches and modular components in the implementation process. Every BigQA component has a specific purpose and can operate autonomously. As stated by the Agile manifesto [21], the components should be developed independently and evolutionarily. Furthermore, it is necessary to promote collaboration across time zones, locations, and organizational boundaries to build agile teams and simplify the development process [36].
6 Case Study: Pharmaceutical Company
In this section, we present a case study to show how to deploy BigQA to enable a knowledge base containing real-world documents. Our goal is not to perform an extensive analysis of the architecture components. Instead, we implement a real-world case to assess the architecture purpose, following the design principles and guidelines proposed in Sects. 3 and 5, respectively. Section 6.1 describes how to instantiate BigQA. Section 6.2 details the queries.
6.1 Architecture Instantiation
Figure 2 depicts the BigQA components and layers instantiated in the case study, along with other open-source components. The Input Layer contains JSON documents obtained from the training sets of two real-world datasets: (i) the Stanford Question Answering Dataset (SQuAD) v1.1 [30]; and (ii) COVID-QA [22].
SQuAD is a valuable question answering dataset for a wide range of domains. It contains more than 18k different converted data and over 87k questions and answers about Wikipedia articles. The content covers several topics, including pharmacy, software testing, TV series, car companies, and geology.
COVID-QA contains more than 2k questions and answers, carefully annotated by biomedical experts. These experts reviewed 147 scientific articles specifically focused on COVID-19. This dataset is not open-domain. We incorporated it in Query 3 (Sect. 6.2) as data augmentation to demonstrate how BigQA can effectively incorporate data from various data sources and formats.
The Data Lake, in this particular case, does not retain raw documents. The dataset source has already cleaned and processed the text data, which is available as JSON documents. As a result, the JSON documents are simply transformed into records that are directly stored in the Document Store.
We employed the Elasticsearch tool to implement the Document Store. Furthermore, we used the Haystack⁹ tool to create the Big Querying and Communication Layers. Haystack is an open-source Python framework that supports different search engines and includes many state-of-the-art NLP models. We used the well-established QA algorithms BM25 [31] and RoBERTa [19] as the Document Retriever and the Document Reader, respectively. Regarding BM25, it was the best Document Retriever algorithm in the experiments of Sect. 7. The code was written in Python using Jupyter Notebooks and is available on GitHub¹⁰.
6.2 Real-World Queries
We showcase three queries that can be executed on the instantiated architecture
described in Sect. 6.1. We analyzed real-world applications by issuing various queries,
focusing on different aspects. We formulate the queries based on the pharmaceutical
company described in Example 1.
The Document Retriever was designed to return the top 20 documents with useful information for each query. As for the Document Reader, it was set to return the 3 most probable answers. Thus, there are three possible answers for each question. The Document Reader provides a probability score for each answer. Higher scores indicate more confidence in the prediction. Moreover, all queries returned answers because we did not check for a scenario with no answer.

9 Haystack - https://haystack.deepset.ai/.
10 BigQA codes - https://github.com/leomaurodesenv/big-qa-architecture.
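The answer-selection step described above can be sketched as follows: the Document Reader's candidates are ranked by probability score and the top 3 are returned, with an optional confidence threshold for the no-answer scenario that the case study leaves unchecked. The `top_answers` helper is an illustrative assumption.

```python
def top_answers(candidates, k=3, min_score=None):
    """Select the k most probable answers from Document Reader output.
    `candidates` is a list of (answer, score) pairs; an optional threshold
    discards low-confidence predictions (the case study used none)."""
    ranked = sorted(candidates, key=lambda x: x[1], reverse=True)
    if min_score is not None:
        ranked = [(a, s) for a, s in ranked if s >= min_score]
    return ranked[:k]
```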
Query 1. What law regulates drug marketing in the pharmaceutical industry? This query represents the interest of pharmacists, marketing, and legal employees in knowing about regulatory laws on drug marketing. The query aims to find the name of a regulatory law, considering that only one document contains the correct answer. We executed Query 1 on the SQuAD dataset.
Table 3 shows the results of Query 1. The first two answers come from pharmaceutical industry documents, and the last one from a document about legal penalties in the United States. The answer with the highest probability score is the right and expected answer. We can conclude that the instantiated architecture could extract the answer to Query 1 with a score of about 76%.

Table 3. Query 1: What law regulates drug marketing in the pharmaceutical industry?

Answer | Document | Score
Prescription Drug Marketing Act of 1987 | Pharmaceutical industry | 76.34%
Food and Drug Administration (FDA) | Pharmaceutical industry | 19.77%
Torture Regulation | Capital punishment in the United States | 11.01%

Note. Adapted from Moraes et al. [23].
Query 2. When was the Luria-Delbrück? This query represents the interest of microbiologists in extracting information about a bacterial experiment related to antibiotics, which occurred in 1943. This query is date-oriented, as it specifically searches for a particular year. It focuses on examining the architecture's capability to recognize dates in documents. We ran Query 2 on the SQuAD dataset.
Table 4 depicts the results of Query 2. The first retrieved document is related to antibiotics, while the remaining documents refer to Arnold Schwarzenegger. The answer with the highest probability score is the right and expected one. Nevertheless, as the score is less than 50%, the Document Reader struggles to accurately extract the answer. Typically, when scores are below 50%, the algorithm fails to locate a solution. In spite of this aspect, the instantiated architecture identified the answer to Query 2.
Query 3. What is the Novel Coronavirus? This query provides valuable information for pharmaceutical employees and the external public. We ran Query 3 on the SQuAD dataset, which we expanded with the COVID-QA dataset. This augmented query type explores the architecture's ability to extract knowledge from new documents from different data sources and document formats.
Table 5 depicts the results of Query 3. Every returned document refers to the Coro-
navirus and provides a score of over 70%. The first and third answers are the correct
ones. After processing and inserting new documents into the Document Store, the archi-
tecture can retrieve the augmented data to answer Query 3.
Table 4. Query 2: When was the Luria-Delbrück?

Answer | Document | Score
1943 | Antibiotics | 29.89%
14 | Arnold Schwarzenegger | 6.84%
14 | Arnold Schwarzenegger | 3.06%

Note. Adapted from Moraes et al. [23].

Table 5. Query 3: What is the Novel Coronavirus?

Answer | Document | Score
SARS-CoV-2 | COVID-QA | 87.70%
Prevention for 2019 | COVID-QA | 76.78%
SARS-CoV-2 | COVID-QA | 71.66%

Note. Adapted from Moraes et al. [23].
6.3 Case Study Discussion
The case study showcased how BigQA can be successfully applied to real-world scenarios, utilizing large datasets comprising Wikipedia articles and FAQ questions and answers. As discussed in Sect. 6.1, we adapted the instantiated architecture to the application requirements by implementing only the appropriate layers and components. Finally, in Sect. 6.2, we presented distinct types of queries analyzing different aspects related to business applications.
The results showed that the system could respond to queries about the name of a law and about augmented data. However, it struggled with the probabilities of the date-related question. In this context, the Document Reader should be fine-tuned with date question samples to optimize the performance on related questions.
7 Document Retriever Experiments
Because BigQA is agnostic, the Document Retriever can employ any QA algorithm. In this section, we conduct 60 experiments to evaluate three well-established QA algorithms and investigate their recall scores. Our motivation is driven by the fact that employing a higher-recall algorithm leads to better end-to-end querying and answering performance [13]. Section 7.1 details the experiment setup. Section 7.2 discusses the experiment results.
7.1 Experiment Setup
We used the same instantiation described in Sect. 5.2 (Fig. 2) in the experiments. The Input Layer received JSON files as dataset documents. The Document Store maintained these documents using Elasticsearch. We implemented the Document Retriever for evaluation. The code was written in Python using Jupyter Notebooks and is available on GitHub (see footnote 10).
Table 6. Recall results of the Document Retriever algorithms investigated.

         SQuAD                          AdversarialQA
         BM25     TF-IDF   DPR          BM25     TF-IDF   DPR
k = 1    71.15%   63.72%   48.59%       52.80%   45.92%   33.02%
k = 5    91.50%   86.50%   76.66%       69.51%   67.07%   56.81%
k = 10   94.43%   92.01%   85.72%       81.35%   81.81%   89.17%
k = 15   95.64%   94.46%   89.40%       77.47%   78.30%   70.18%
k = 20   96.29%   95.83%   91.38%       84.89%   85.56%   99.43%

         DuoRC                          QASports / Basketball
         BM25     TF-IDF   DPR          BM25     TF-IDF   DPR
k = 1    71.37%   56.34%   21.46%       65.60%   51.87%   28.15%
k = 5    88.83%   82.41%   36.41%       81.80%   74.52%   50.32%
k = 10   91.49%   87.47%   35.83%       87.04%   81.89%   59.65%
k = 15   90.59%   85.26%   48.42%       90.79%   86.71%   65.78%
k = 20   93.76%   90.81%   44.78%       90.64%   87.51%   68.23%
We explored the following well-established QA algorithms: BM25 [31], TF-IDF [34], and Dense Passage Retriever (DPR) [13]. Furthermore, we used question-document pairs as input data from the following open-domain real-world datasets with different data characteristics.

— SQuAD v1.1 [30]: with 10,570 question-document pair samples.
— AdversarialQA [4]: a QA dataset in which humans have created adverse and complex questions, so the models cannot answer these questions easily. The dataset contains a total of 3k samples of question-document pairs.
— DuoRC [33]: a dataset of movie plot questions and answers on articles from Wikipedia and IMDb, containing 12,845 question-document pair samples.
— QASports [9]: a large dataset containing more than 1.5 million records encompassing various sports domains. We used 23,242 question-document pairs about basketball from this dataset.
The purpose of the experiments was to evaluate the performance of the QA algorithms in retrieving the correct document for a given question. To this end, we employed the recall measure. This measure calculates the number of times a given algorithm correctly retrieves the desired document among the top-k documents retrieved. We varied the value of k in [1, 5, 10, 15, 20]. The literature typically utilizes a value of k that is equal to or greater than 20. We used smaller values to provide fast answers without sacrificing performance, as business applications need to retrieve fewer documents and provide accurate answers quickly.
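The recall measure described above can be computed as a short function. The `recall_at_k` name and the input layout (one list of retrieved document identifiers per question, plus the gold identifiers) are assumptions for illustration.

```python
def recall_at_k(retrieved_lists, gold_ids, k):
    """Fraction of questions whose gold document appears among the top-k
    retrieved documents (the recall measure used in the experiments)."""
    hits = sum(1 for retrieved, gold in zip(retrieved_lists, gold_ids)
               if gold in retrieved[:k])
    return hits / len(gold_ids)
```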
7.2 Experiment Results
Table 6 depicts the recall results of the investigated algorithms. The results demonstrate that the recall score usually increases as the value of k increases, indicating that retrieving more documents raises the probability of finding the desired one.
In most cases, BM25 provided the best performance. BM25 is an extension of TF-IDF that incorporates a probabilistic information retrieval model, resulting in an enhanced recall score. BM25 is a sparse algorithm, in contrast to the dense DPR algorithm. Dense algorithms can be quite costly regarding time and secondary memory usage. Based on these findings, we implemented BM25 as the Document Retriever for the case study described in Sect. 6.
DPR outperformed BM25 and TF-IDF on the AdversarialQA dataset, providing higher performance for k values of 10 and 20. DPR better understood the subject and context of the questions in these cases because dense algorithms are more effective on complex datasets. Despite these results, we recommend employing BM25 as the standard algorithm for the Document Retriever.
8 Conclusion
In this paper, we introduced a set of design principles for developing reliable and secure systems based on business, data, and technical aspects. Based on these principles, we proposed BigQA, the first Big Data Question Answering architecture. BigQA is a software reference architecture composed of six layers: (i) Input, for document ingestion; (ii) Big Data Storage, for storing and processing textual data; (iii) Big Querying, as the query engine; (iv) Communication, for the user interface; (v) Security, to provide security artifacts; and (vi) Insights, to assist with data analysis. The architecture is agnostic, i.e., independent of programming language, technology, and Question Answering algorithm.
|
We also outlined guidelines to support teams to develop and employ BigQA. The
guidelines refer to good practices for implementing the Data Lake and the Document
Store components of the Big Data Storage Layer. They also provide procedures for
building the Big Data Storage, Big Querying, and Insights Layers. Furthermore, we
showed three implementation pipelines to demonstrate BigQA versatility and applica-
bility, focusing on using open-source and paid proprietary technologies and incorporat-
|
ing data mining and analysis tools.
Moreover, we validated BigQA by implementing a case study in the context of a pharmaceutical company. We used two real-world datasets: one with Wikipedia articles and another with frequently asked questions about COVID-19. We issued different queries, demonstrating the potential of BigQA in developing real-world applications. We implemented the BM25 algorithm as the Document Retriever since it provided the best results according to our evaluation. In this evaluation, we conducted 60 experiments over four datasets to compare the BM25, TF-IDF, and Dense Passage Retriever algorithms. All code is available on GitHub (see footnote 10).
We are currently conducting experiments to assess the performance of different algorithms to implement the Document Reader. Another future work involves studying technologies and algorithms to implement the Insights and Security layers. Finally, we plan to analyze new case studies to instantiate BigQA considering different real-world applications.
Acknowledgements. We thank Amaris Consulting, São Paulo Research Foundation (FAPESP), Brazilian Federal Research Agency CNPq, and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brazil (CAPES) [Finance Code 001] for supporting this work. P. C. Jardim has been supported by grant 2023/08293-9, FAPESP. C. D. Aguiar has been supported by grant 2018/22277-8, FAPESP. L. M. P. Moraes has been supported by Amaris Consulting.
References

1. Armbrust, M., Ghodsi, A., Xin, R., Zaharia, M.: Lakehouse: a new generation of open platforms that unify data warehousing and advanced analytics. In: Proceedings of the Conference on Innovative Data Systems Research, vol. 8 (2021)
2. Ataei, P., Litchfield, A.: NeoMycelia: a software reference architecture for big data systems. In: Proceedings of the 28th Asia-Pacific Software Engineering Conference, pp. 452–462 (2021). https://doi.org/10.1109/APSEC53868.2021.00052
3. Athira, P., Sreeja, M., Reghuraj, P.: Architecture of an ontology-based domain-specific natural language question answering system. Int. J. Web Semant. Technol. 4(4), article number 31 (2013). https://doi.org/10.48550/ARXIV.1311.3175
4. Bartolo, M., Roberts, A., Welbl, J., Riedel, S., Stenetorp, P.: Beat the AI: investigating adversarial human annotation for reading comprehension. Trans. Assoc. Comput. Linguist. 8, 662–678 (2020). https://doi.org/10.1162/tacl_a_00338
5. Cassavia, N., Masciari, E.: Sigma: a scalable high performance big data architecture. In: Proceedings of the 29th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, pp. 236–239 (2021). https://doi.org/10.1109/PDP52278.2021.00044
6. Derras, M., et al.: Reference architecture design: a practical approach. In: Proceedings of the 13th International Conference on Software Technologies, pp. 633–640 (2018). https://doi.org/10.5220/0006865006330640
7. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186. Association for Computational Linguistics, Minneapolis (2019). https://doi.org/10.18653/v1/N19-1423
8. Galster, M., Avgeriou, P.: Empirically-grounded reference architectures: a proposal. In: Proceedings of the Joint ACM SIGSOFT Conference - QoSA and Architecting Critical Systems, pp. 153–158 (2011). https://doi.org/10.1145/2000259.2000285
9. Jardim, P., Moraes, L.M.P., Aguiar, C.D.: QASports: a question answering dataset about sports. In: Proceedings of the Brazilian Symposium on Databases: Dataset Showcase Workshop. SBC, Belo Horizonte (2023)
10. Ji, Z., et al.: Survey of hallucination in natural language generation. ACM Comput. Surv. (2022). https://doi.org/10.1145/3571730
11. John, T., Misra, P.: Data Lake for Enterprises. Packt Publishing Ltd. (2017)
12. Johnson, J., Douze, M., Jégou, H.: Billion-scale similarity search with GPUs. IEEE Trans. Big Data 7(3), 535–547 (2019)
13. Karpukhin, V., et al.: Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906 (2020). https://doi.org/10.48550/ARXIV.2004.04906
14. Klein, J., Buglak, R., Blockow, D., Wuttke, T., Cooper, B.: A reference architecture for big data systems in the national security domain. In: Proceedings of the IEEE/ACM 2nd International Workshop on Big Data Software Engineering, pp. 51–57 (2016). https://doi.org/10.1145/2896825.2896834
15. Kononenko, O., Baysal, O., Holmes, R., Godfrey, M.W.: Mining modern repositories with elasticsearch. In: Proceedings of the 11th Working Conference on Mining Software Repositories, pp. 328–331 (2014). https://doi.org/10.1145/2597073.2597091
16. Laney, D., et al.: 3D data management: controlling data volume, velocity and variety. META Group Res. Note 6(70), 1 (2001)
17. Lepenioti, K., Bousdekis, A., Apostolou, D., Mentzas, G.: Prescriptive analytics: literature review and research challenges. Int. J. Inf. Manage. 50, 57–70 (2020)
18. Li, Q., Xu, Z., Wei, H., Yu, C., Wang, S.: General big data architecture and methodology: an analysis focused framework. In: Debruyne, C., et al. (eds.) OTM 2019. LNCS, vol. 11878, pp. 33–43. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-40907-4_4
19. Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019). https://arxiv.org/abs/1907.11692
20. Lydia, E.L., Satyanarayan, S., Kumar, K.V., Ramya, D.: Indexing documents with reliable indexing techniques using Apache Lucene in Hadoop. Int. J. Intell. Enterp. 7(1–3), 203–214 (2020). https://doi.org/10.1504/IJIE.2020.104656
21. Misra, S., Kumar, V., Kumar, U., Fantazy, K., Akhter, M.: Agile software development practices: evolution, principles, and criticisms. Int. J. Qual. Reliab. Manage. 29(9), 972–980 (2012)
22. Möller, T., Reina, A., Jayakumar, R., Pietsch, M.: COVID-QA: a question answering dataset for COVID-19. In: Proceedings of the 1st Workshop on NLP for COVID-19 at Association for Computational Linguistics, p. 1 (2020)
23. Moraes, L.M.P., Jardim, P., Aguiar, C.D.: Design principles and a software reference architecture for big data question answering systems. In: Proceedings of the 25th International Conference on Enterprise Information Systems, pp. 57–67. INSTICC, SciTePress (2023). https://doi.org/10.5220/0011842700003467
24. Müller, M., Vorraber, W., Slany, W.: Open principles in new business models for information systems. J. Open Innov.: Technol. Mark. Complexity 5(6), 1–13 (2019). https://doi.org/10.3390/joitmc5010006
25. Nielsen, R.D., et al.: An architecture for complex clinical question answering. In: Proceedings of the 1st ACM International Health Informatics Symposium, pp. 395–399 (2010). https://doi.org/10.1145/1882992.1883050
26. Novo-Loures, M., Pavon, R., Laza, R., Ruano-Ordas, D., Mendez, J.R.: Using natural language preprocessing architecture (NLPA) for big data text sources. Hindawi Sci. Program. 1–13, article id 2390941 (2020)
27. Petroni, F., et al.: Language models as knowledge bases? In: Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 2463–2473 (2019). https://doi.org/10.18653/v1/D19-1250
28. Poniszewska-Marańda, A., Czechowska, E.: Kubernetes cluster for automating software production environment. Sens. J. 21(5), article number 1910 (2021). https://doi.org/10.3390/s21051910
29. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training (2018)
30. Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P.: SQuAD: 100,000+ questions for machine comprehension of text. arXiv e-prints arXiv:1606.05250 (2016)
31. Robertson, S.E., Jones, K.S.: Relevance weighting of search terms. J. Am. Soc. Inf. Sci. 27(3), 129–146 (1976). https://doi.org/10.1002/asi.4630270302
32. Romualdo, A., Real, L., Caseli, H.: Measuring Brazilian Portuguese product titles similarity using embeddings. In: Proceedings of the 13th Brazilian Symposium on Information Technology and Human Language, pp. 121–132. SBC (2021). https://doi.org/10.5753/stil.2021.17791