GitHub Copilot is the talk of the town. This (paid) AI tool can make many aspects of coding much easier. Still, it does come with some drawbacks that range from copyright and legal issues to cybersecurity and compatibility. Read on to discover what experts in code and AI think about it. Thanks to Tommaso Allevi, Riccardo Degni, and Mauro Bennici for their collaboration!
What we like about Copilot
According to Senior Fullstack Developer Riccardo Degni, Copilot is a very useful productivity tool, since the developer can ask the assistant to write code for them: given a prompt, Copilot completes the algorithm or the snippet of code, saving development time. Furthermore, the AI can point out problems in the code and propose the right solutions.
Sometimes a developer gets stuck while writing because of the sheer amount of high-level reasoning needed to make the code work perfectly. Here Copilot can come to the rescue as a form of “pair programming”, the agile technique built around two roles: the “driver”, who writes the code, and the “observer”, who supervises and reviews the writing, proposing alternative strategies and solutions to problems.
This, explains Degni, lets devs focus on what to write without thinking too much about how to write it, saving a lot of time. Also, although it is more accurate in English, Copilot is able to make suggestions in different human languages, since it has a translation feature that lets the developer write in the requested language.
For example, given the comment “Translations”, Copilot will automatically propose translations of the word “Hello” into the available languages; it is then up to the developer to accept the right one for the job.
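To make the workflow concrete, here is a minimal sketch of what such a comment-driven completion might look like. The comment is the prompt, and everything below it is the kind of completion an assistant like Copilot might propose (the dictionary contents and the `greet` helper are our own hypothetical illustration, not actual Copilot output):

```python
# Translations of "Hello"
# A completion a tool like Copilot might propose from the comment above:
hello_translations = {
    "en": "Hello",
    "it": "Ciao",
    "es": "Hola",
    "fr": "Bonjour",
    "de": "Hallo",
}

def greet(language_code: str) -> str:
    """Return the greeting for a language code, falling back to English."""
    return hello_translations.get(language_code, hello_translations["en"])

print(greet("it"))  # Ciao
```

As the article notes, it remains the developer's job to review each proposed entry and accept only the ones that fit.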
Summing up the pros of Copilot:
- It makes some repetitive tasks easier.
- It can help you beat “coder’s block”.
- It can help you understand coding languages you don’t know that well.
- It offers some multilanguage support.
What we don’t like about Copilot
However, Degni also points out some of the cons of Copilot. Being a still “young” tool, complications can arise during the writing phase, so it is not well suited to junior developers, who may run into senseless suggestions or wrong code predictions. Copilot can even create security problems that only an expert developer would catch. The tool is therefore not recommended for those without much coding experience: they may not fully understand the code it suggests, and that could cause serious damage.
Furthermore, Copilot does not test the code, and an inexperienced eye may not notice errors in its suggestions until the testing phase, wasting precious time for the developer, who will then have to track down and correct the error.
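Because the assistant never runs what it suggests, even a plausible-looking completion deserves a quick test. Here is a hypothetical illustration (the buggy and fixed versions are our own, not real Copilot output): a pagination helper with a subtle off-by-one flaw that a one-line test exposes immediately.

```python
def count_pages_suggested(total_items: int, page_size: int) -> int:
    # Plausible but wrong suggestion: floor division drops the partial last page.
    return total_items // page_size

def count_pages_fixed(total_items: int, page_size: int) -> int:
    # Corrected version: ceiling division keeps the partial last page.
    return -(-total_items // page_size)

# A minimal test catches the discrepancy before it reaches production.
assert count_pages_fixed(10, 3) == 4       # 3 full pages + 1 partial page
assert count_pages_suggested(10, 3) == 3   # the bug a test would flag
```

The point is not this particular bug, but the habit: treat each suggestion as untested code, because that is exactly what it is.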
Tommaso Allevi, Software Architect at Digitech Solutions, also explains some of the downsides of the AI tool’s code “suggestions”: “You start writing something, Copilot writes its code, so you stop your thoughts, read the code, think ‘almost right’, and change the proposed code. If you write the code by yourself, you probably consume less energy.” The result is the disruption of your natural, logical flow of thinking.
Copilot can also translate lines of code into various languages, but, being trained on open-source repositories, its translations are not guaranteed to be precise: the AI behind the tool “learns” the terms it finds in those repositories, and sometimes they are wrong.
It is also a fairly expensive subscription service, so unless you are a professional who codes very actively month after month, it’s important to work out whether it is worth buying.
This tool is not exempt from criticism beyond code security and usefulness: “Being based on open-source repositories, some have accused Microsoft of ‘stealing’ other people’s code for profit. This has led users to reflect: many wonder how they can be sure they are using truly open-source code, and not code protected by licenses or patents that could cost them a dispute for copyright infringement,” explains Degni.
This shouldn’t be underestimated, since unknowingly using legally protected code could cost unaware users dearly. Precisely for this reason, some industry players have taken legal action against Microsoft, and only time will make the picture clearer.
Summing up what we don’t like about Copilot:
- It’s not really for junior devs. You have to “distrust” it much of the time, just as if you were using a spell checker or auto-translation tool.
- Got to pay for the full version.
- It doesn’t work great with every language.
- It doesn’t test your code.
- It can be vulnerable and/or buggy.
- There are many copyright issues to consider.
- It can disrupt your natural flow of thinking.
Let’s dig deeper into code copyright and other issues
Some of the uncertainties GitHub Copilot raises around copyright and security are worth a deeper analysis. AI Architect and AI-ethics expert Mauro Bennici shares some of his thoughts on this:
The convenience of a custom code generator is easy to verify in a few simple steps. It’s like doing a search on Google or Stack Overflow, except that instead of links and comments we get our very own ready-to-use code. GitHub’s Copilot is one of the best-known such tools and, like any pioneer, it’s not without early problems. Let’s take a look at some of them.
It’s not immune to misinterpreting the question asked, so it shouldn’t be used without fully understanding the answer it provides (which, if we want to be good developers, applies to any code we run into). The answer could be incomplete or could introduce known issues that are difficult for a novice to detect, such as a memory-leak bug, unsafe instructions for the requested operation, or passwords stored unencrypted.
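The unencrypted-password pitfall is worth spelling out. Below is a minimal sketch (our own illustration, not actual Copilot output) contrasting the naive pattern an assistant might plausibly suggest with a safer standard-library approach using salted PBKDF2:

```python
import hashlib
import hmac
import os

def store_password_naive(password: str) -> str:
    # The kind of shortcut an assistant might suggest: plaintext storage.
    # Anyone who reads the database sees every user's password.
    return password

def store_password_hashed(password: str) -> tuple[bytes, bytes]:
    # Safer: a random salt plus PBKDF2 key derivation from the stdlib.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

A novice reviewing the first function might see nothing wrong with it, which is exactly the kind of subtle issue Bennici is warning about.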
Copilot’s AI works, like many models, thanks to what can be called the dictatorship of the majority, and it inherits the biases present in the code used to train it. The same flaw is easy to spot in the latest generation of autocorrect tools, which learn from how users write. It’s not hard to run into situations where a correct “its” is flagged as incorrect and “it’s” is suggested instead, which is a grammatical nightmare. The same goes for “then” instead of “than”, and so on. Most people either spell the word wrong or, worse, spell it right but accept the automatic spell checker’s suggestion.
This feedback loop convinces the AI that its suggestion is correct and strengthens its “position”.
Copilot, however, receives no such direct feedback. Catching its mistakes is left to our IDE or compiler (for serious errors) or to the expertise of the developer. Moreover, the corrected code won’t become part of Copilot’s refinement dataset unless it’s released on GitHub in a public repository. And even then, we would have to wait for the release and hope that the majority of similar cases consist of correct code.
The training dataset is at the center of a lawsuit against Microsoft, the parent company of GitHub. The accusation is that it took other people’s code for profit without recognizing the intellectual property of its rightful owners. Let’s take a step back: the fact that code is released as open source does not mean it’s free of obligations. It means we can see its source code, but we have to respect the license under which it was released.
How can we be sure that what Copilot learned, and what it generates, is actually far removed from the code of an (A)GPL project? Or that a snippet it reproduces doesn’t come from code licensed only to enterprise or educational users?
How can we be sure that the code we bring into our project does not violate any license, or that we won’t be challenged about it one day? Remember that in the US it is also possible to patent code, and in Europe copyright still applies. Moreover, Microsoft actually sells Copilot, so it cannot invoke non-profit or scientific-research exemptions.
As a user, I can only appreciate the speed at which routine code, or functions I use only a couple of times a year, is ready at my command in seconds. Behind the scenes, I expect it to work much like Google Translate between the request and the desired language, but without handling semantic or idiomatic complexity. That limit becomes obvious as soon as the request grows more complex: the generated code comes out jumbled, long-winded, or tangled in a way a pro in that language would never write, thanks to their deep knowledge of the language, of algorithms and data structures, and of software architecture.
Certainly, smart assistants are here to stay and will enhance our everyday lives, leaving us free to focus on the business at hand. We’ll see whether we have to add a clause to our licenses to authorize or deny the use of Copilot, or whether Microsoft will be forced to train it only on its own source code (and not on everything it has access to).
GitHub Copilot: draw your own conclusions
We always try to offer our community the best insights, but in the end, your personal developer experience is what counts. GitHub Copilot offers a free trial mode that we recommend you try out. Maybe you will find it useful, maybe not, but we hope this article will help you understand more thoroughly the different aspects of AI tools.
In conclusion, despite being an excellent tool, its job is not to completely replace the programmer but to support them by suggesting cleaner code.
Using the tool has many positive aspects and, in part, some negative ones, but that shouldn’t discourage you from trying it and drawing your own conclusions, especially since the first three months are free!
And remember, AI won’t replace you!
More about this topic: Is GitHub Copilot the solution to dev struggles?