Code it
This is a curated resource for programmers and software architects. It is regularly updated with Articles, Hacks, How Tos, Examples and Code.
Curated by nrip

Creating an effective bug report

Providing clear steps to reproduce an issue is a benefit for everyone

 

When I’ve had to contact a company’s technical support through a form, I provide a ridiculously detailed description of what the issue is, when it happens, the specific troubleshooting that narrows it down, and (if it is allowed by the form) screenshots. I suspect the people on the other end aren’t used to that, because I know it is not what happens most of the time when people submit issues to me.

What’s the benefit of an effective bug report?

There are two sides to this: one is the efficiency of the interaction between the bug reporter and the (hopeful) bug fixer; the other is the actual likelihood that the issue will ever be fixed.

 

Why do we need “repro steps”?

The first part of fixing a problem is to make sure you understand it and can see it in action. If you can’t see the problem happen consistently, you can’t be sure you’ve fixed it. This is not just a software development concept; it is true of nearly everything. If someone tells me the faucet is dripping, I’m going to want to see the drip happening before I try to fix it… then, after the repair, when I look and see it not dripping, I can feel confident that I resolved the problem.
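As an illustration, a repro-steps section of a bug report might look like the following (the product, steps, and error shown here are hypothetical, not from the original post):

```
Title: Export button does nothing on the Reports page

Steps to reproduce:
  1. Log in as a user with the "Viewer" role.
  2. Open Reports > Monthly Summary.
  3. Click "Export to CSV".

Expected: a CSV file download starts.
Actual: nothing happens; the browser console shows a 403 error.

Environment: Firefox 124 on Windows 11; happens every time.
```

The key properties are numbered steps anyone can follow, an explicit expected-vs-actual contrast, and enough environment detail to reproduce the failure consistently.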

 

Is it possible to provide too much information?

I’d rather have too much detail than too little, but I also feel that the bug reporter shouldn’t be expected to try to track down every possible variable or to try to fix the problem.

 

Human Nature and Incomplete Issues

I mentioned two sides to this, the efficiency of interaction and the likelihood an issue will be fixed. If you assume someone will ask for all the missing info, even if it takes a lot longer, then the only benefit to a complete bug report is efficiency. In reality though, developers are human, and when they see an incomplete issue where it could take many back-and-forth conversations to get the details they need, they will avoid it in favor of one that they can pick up and work on right now with no delay. Your issue, which could be more important than anything else in the queue, could end up being ignored because it is unclear.

 

A detailed bug report, with clear repro steps, is a win for everyone.

 

Read the full, unedited post at https://www.duncanmackenzie.net/blog/creating-an-effective-bug-report/

 

 

nrip's insight:
  • A detailed bug report, with clear repro steps, is a win for everyone.
  • Too much detail is not a bad thing.
  • Getting something that isn't working as expected fixed for good is worth the extra effort.

Humans are leading AI systems astray because we can't agree on labeling


Is it a bird? Is it a plane? Asking for a friend's machine-learning code

 

Top datasets used to train AI models and benchmark how the technology has progressed over time are riddled with labeling errors, a study shows.

 

Data is a vital resource in teaching machines how to complete specific tasks, whether that's identifying different species of plants or automatically generating captions. Most neural networks are spoon-fed lots and lots of annotated samples before they can learn common patterns in data.

 

But these labels aren’t always correct; training machines using error-prone datasets can decrease their performance or accuracy. In the aforementioned study, led by MIT, analysts combed through ten popular datasets that have been cited more than 100,000 times in academic papers and found that on average 3.4 per cent of the samples are wrongly labelled.
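One common way to surface likely label errors, and roughly the approach behind studies like this one, is to train a model on the dataset and flag samples where the model confidently predicts a class that differs from the given label. A minimal sketch of that idea (the threshold and toy data below are illustrative, not the study's actual method):

```python
# Flag samples whose given label disagrees with a model's confident
# prediction -- a rough sketch of label-error detection.
def flag_likely_label_errors(probs, labels, threshold=0.9):
    """probs: per-sample lists of class probabilities from a trained model;
    labels: the given (possibly wrong) class indices.
    Returns indices of samples that look mislabelled."""
    suspects = []
    for i, (p, y) in enumerate(zip(probs, labels)):
        predicted = max(range(len(p)), key=lambda c: p[c])
        # Suspect: the model strongly favours a class other than the label.
        if predicted != y and p[predicted] >= threshold:
            suspects.append(i)
    return suspects

# Toy example: 4 samples, 2 classes (0 = crocodile, 1 = light bulb).
probs = [[0.97, 0.03],   # confidently class 0, labelled 0 -> fine
         [0.05, 0.95],   # confidently class 1, labelled 0 -> suspect
         [0.60, 0.40],   # model is unsure -> leave alone
         [0.02, 0.98]]   # confidently class 1, labelled 1 -> fine
labels = [0, 0, 0, 1]
print(flag_likely_label_errors(probs, labels))  # -> [1]
```

Flagged samples still need human review, which is where the genuinely ambiguous cases described below come in.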

 

The datasets they looked at range from photographs in ImageNet and sounds in AudioSet to reviews scraped from Amazon and sketches in QuickDraw.

 

Examples of the mistakes compiled by the researchers show that in some cases it’s a clear blunder, such as a drawing of a light bulb tagged as a crocodile. In others, however, the right answer isn’t obvious: should a picture of a bucket of baseballs be labeled ‘baseballs’ or ‘bucket’?

 

What would happen if a self-driving car were trained on a dataset with frequent label errors that mislabel a three-way intersection as a four-way intersection?

 

Read more at https://www.theregister.com/2021/04/01/mit_ai_accuracy/

 

 
