@inproceedings{Silva-2019-Recommending,
title = "Recommending Comprehensive Solutions for Programming Tasks by Mining Crowd Knowledge",
author = "Silva, Rodrigo F. G. and
Roy, Chanchal K. and
Rahman, Mohammad Masudur and
Schneider, Kevin A. and
Paix{\~a}o, Kl{\'e}risson V. R. and
Maia, Marcelo de Almeida",
journal = "2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC)",
year = "2019",
publisher = "IEEE",
url = "https://gwf-uwaterloo.github.io/gwf-publications/G19-77002",
doi = "10.1109/icpc.2019.00054",
abstract = "Developers often search for relevant code examples on the web for their programming tasks. Unfortunately, they face two major problems. First, the search is impaired due to a lexical gap between their query (task description) and the information associated with the solution. Second, the retrieved solution may not be comprehensive, i.e., the code segment might miss a succinct explanation. These problems make the developers browse dozens of documents in order to synthesize an appropriate solution. To address these two problems, we propose CROKAGE (Crowd Knowledge Answer Generator), a tool that takes the description of a programming task (the query) and provides a comprehensive solution for the task. Our solutions contain not only relevant code examples but also their succinct explanations. Our proposed approach expands the task description with relevant API classes from Stack Overflow Q {\&} A threads and then mitigates the lexical gap problems. Furthermore, we perform natural language processing on the top quality answers and then return such programming solutions containing code examples and code explanations unlike earlier studies. We evaluate our approach using 97 programming queries, of which 50{\%} was used for training and 50{\%} was used for testing, and show that it outperforms six baselines including the state-of-art by a statistically significant margin. Furthermore, our evaluation with 29 developers using 24 tasks (queries) confirms the superiority of CROKAGE over the state-of-art tool in terms of relevance of the suggested code examples, benefit of the code explanations and the overall solution quality (code + explanation).",
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="Silva-2019-Recommending">
<titleInfo>
<title>Recommending Comprehensive Solutions for Programming Tasks by Mining Crowd Knowledge</title>
</titleInfo>
<name type="personal">
<namePart type="given">Rodrigo</namePart>
<namePart type="given">F</namePart>
<namePart type="given">G</namePart>
<namePart type="family">Silva</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Chanchal</namePart>
<namePart type="given">K</namePart>
<namePart type="family">Roy</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Mohammad</namePart>
<namePart type="given">Masudur</namePart>
<namePart type="family">Rahman</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Kevin</namePart>
<namePart type="given">A</namePart>
<namePart type="family">Schneider</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Klérisson</namePart>
<namePart type="given">V</namePart>
<namePart type="given">R</namePart>
<namePart type="family">Paixão</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Marcelo</namePart>
<namePart type="given">de</namePart>
<namePart type="given">Almeida</namePart>
<namePart type="family">Maia</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2019</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<genre authority="bibutilsgt">journal article</genre>
<relatedItem type="host">
<titleInfo>
<title>2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC)</title>
</titleInfo>
<originInfo>
<issuance>monographic</issuance>
<publisher>IEEE</publisher>
</originInfo>
<genre authority="marcgt">periodical</genre>
<genre authority="bibutilsgt">academic journal</genre>
</relatedItem>
<abstract>Developers often search the web for relevant code examples for their programming tasks. Unfortunately, they face two major problems. First, the search is impaired by the lexical gap between their query (task description) and the information associated with the solution. Second, the retrieved solution may not be comprehensive, i.e., the code segment might lack a succinct explanation. These problems force developers to browse dozens of documents in order to synthesize an appropriate solution. To address these two problems, we propose CROKAGE (Crowd Knowledge Answer Generator), a tool that takes the description of a programming task (the query) and provides a comprehensive solution for the task. Our solutions contain not only relevant code examples but also their succinct explanations. Our approach expands the task description with relevant API classes from Stack Overflow Q & A threads, which mitigates the lexical gap problem. Furthermore, we perform natural language processing on the top-quality answers and, unlike earlier studies, return programming solutions that contain both code examples and code explanations. We evaluate our approach using 97 programming queries, of which 50% were used for training and 50% for testing, and show that it outperforms six baselines, including the state of the art, by a statistically significant margin. Furthermore, our evaluation with 29 developers using 24 tasks (queries) confirms the superiority of CROKAGE over the state-of-the-art tool in terms of the relevance of the suggested code examples, the benefit of the code explanations, and the overall solution quality (code + explanation).</abstract>
<identifier type="citekey">Silva-2019-Recommending</identifier>
<identifier type="doi">10.1109/icpc.2019.00054</identifier>
<location>
<url>https://gwf-uwaterloo.github.io/gwf-publications/G19-77002</url>
</location>
<part>
<date>2019</date>
</part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T Recommending Comprehensive Solutions for Programming Tasks by Mining Crowd Knowledge
%A Silva, Rodrigo F. G.
%A Roy, Chanchal K.
%A Rahman, Mohammad Masudur
%A Schneider, Kevin A.
%A Paixão, Klérisson V. R.
%A Maia, Marcelo de Almeida
%B 2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC)
%D 2019
%I IEEE
%F Silva-2019-Recommending
%X Developers often search the web for relevant code examples for their programming tasks. Unfortunately, they face two major problems. First, the search is impaired by the lexical gap between their query (task description) and the information associated with the solution. Second, the retrieved solution may not be comprehensive, i.e., the code segment might lack a succinct explanation. These problems force developers to browse dozens of documents in order to synthesize an appropriate solution. To address these two problems, we propose CROKAGE (Crowd Knowledge Answer Generator), a tool that takes the description of a programming task (the query) and provides a comprehensive solution for the task. Our solutions contain not only relevant code examples but also their succinct explanations. Our approach expands the task description with relevant API classes from Stack Overflow Q & A threads, which mitigates the lexical gap problem. Furthermore, we perform natural language processing on the top-quality answers and, unlike earlier studies, return programming solutions that contain both code examples and code explanations. We evaluate our approach using 97 programming queries, of which 50% were used for training and 50% for testing, and show that it outperforms six baselines, including the state of the art, by a statistically significant margin. Furthermore, our evaluation with 29 developers using 24 tasks (queries) confirms the superiority of CROKAGE over the state-of-the-art tool in terms of the relevance of the suggested code examples, the benefit of the code explanations, and the overall solution quality (code + explanation).
%R 10.1109/icpc.2019.00054
%U https://gwf-uwaterloo.github.io/gwf-publications/G19-77002
%U https://doi.org/10.1109/icpc.2019.00054
Markdown (Informal)
[Recommending Comprehensive Solutions for Programming Tasks by Mining Crowd Knowledge](https://gwf-uwaterloo.github.io/gwf-publications/G19-77002) (Silva et al., GWF 2019)
ACL
- Rodrigo F. G. Silva, Chanchal K. Roy, Mohammad Masudur Rahman, Kevin A. Schneider, Klérisson V. R. Paixão, and Marcelo de Almeida Maia. 2019. Recommending Comprehensive Solutions for Programming Tasks by Mining Crowd Knowledge. In 2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC). IEEE.