Hacker News | wpollock's comments

It's been a few years, but for Java I used OWASP Dependency-Check: <https://owasp.org/www-project-dependency-check/>, which downloads the NVD (so the first run was slow) and scans all dependencies against it. I ran it from Maven as part of the build.
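If it helps, the Maven wiring was roughly this plugin stanza (coordinates from memory; check the project page for the current version and goal names):

```xml
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version><!-- use the current release --></version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```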

Here's a handy shell function to show only the options from a command's man page:

   showoptions() {
      # col -bx strips backspace overstriking; the awk range prints from
      # each line that starts with a dash through the next blank line.
      man -s 1 "$*" | col -bx | awk '/^[ ]*-/,/^$/' | less
   }
(It won't work for every man page, but it works for most, where each option starts with a dash.)

Enjoy!


An interesting approach; good luck with it! A nit to pick: find is not a bash command, it's a standalone program. You can run it, for example, from a Windows command prompt as:

   wsl find ...

You can run any Linux command this way. Also, I'm pretty sure that find's "-o" is the Boolean "or", not "otherwise". (Yet another example of why learning from LLMs is dangerous, I suppose.)


"Otherwise" is actually an accurate description of what -o means: do the thing on the left, and otherwise (if it fails) do the thing on the right. That is, if the clause on the left succeeds, the clause on the right is ignored.

A naive interpretation of "or" in the light of Boolean algebra would be: evaluate both and return true if either succeeds.


Haha, thanks for the further education!

The first section in the introduction describes a YAML file. Bait and switch?

There's always TECO! <Joking>

> Steel-manning the idea, perhaps they would ship object files (.o/.a) and the apt-get equivalent would link the system? I believe this arrangement was common in the days before dynamic linking. You don't have to redownload everything, but you do have to relink everything.

This was indeed common for Unix. The only way to tune the system (or even change the timezone) was to edit the very few source files and run make, which compiled those files and then linked them into a new binary.

Linking-only is (or was) much faster than recompiling.


> Ban publication of any research that hasn't been reproduced.

Unless it is published, nobody will know about it and thus nobody will try to reproduce it.


Just have a new journal of only papers that have been reproduced, and include the reproduction papers.


One purpose of QA testing is compliance assurance, including with applicable policies, industry regulations, and laws. While devs are (usually) good at functional testing, QA (usually) does non-functional testing better. I have not known any devs that test for GDPR compliance, for example. (I am certain many devs do test for that; I'm just stating my personal experience.)


Some points:

LLM oral exams can provide assessment in a student's native language. This can be very important in some scenarios!

Unlimited attempts won't work in the presented model. No matter how many cases you have, all will eventually find their way to the various cheating sites.

There is no silver bullet. There's no solution that works for all schools. Strategies that work well for M.I.T., with competitive enrollment and large budgets, won't work for a small community college in an agricultural state, with large teaching loads per professor, no TAs, and about 15-25 hours a week of committee or other non-teaching work. That was my situation.

Teaching five courses and eight sections, 20-30 students per section, 10-20 office hours every week (and often more if the professor cared about the students), leaves little time for grading. In desperation I turned to weekly homework assignments, 4-6 programming projects, and multiple choice exams (containing code and questions about it). Not ideal by any means, just the best I could do.

So I smile now (I'm retired) when I hear about professors with several TAs each, explaining how they do assessment of 36 students at a school with competitive enrollment.


> Someone changes code to check if the ResultSet is empty before further processing and a large number of your mock based tests break as the original test author will only have mocked enough of the class to support the current implementation.

So this change disallows an empty result set, something that was allowed previously. Isn't that the sort of breaking change you want your regression tests to catch?


I used ResultSet because the comment above mentioned it. A clearer example of what I’m talking about might be say you replace “x.size() > 0” with “!x.isEmpty()” when x is a mocked instance of class X.

If tests (authored by someone else) break, I now have to figure out whether the breakage is due to the fact that not enough behavior was mocked or whether I have inadvertently broken something. Maybe it’s actually important that code avoid using “isEmpty”? Or do I just mock the isEmpty call and hope for the best? What if the existing mocked behavior for size() is non-trivial?

Typically you’re not dealing with something as obvious.


What is the alternative? If you write a complete implementation of an interface for test purposes, can you actually be certain that your version of x.isEmpty() behaves as the actual method? If it has not been used before, can you trust that a green test is valid without manually checking it?

When I use mocking, I try to always use real objects as return values. So if I mock a repository method, like userRepository.search(...), I would return an actual list and not a mocked object. This has worked well for me. If I actually need to test the db query itself, I use a real db.
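As a sketch of that style (the domain types here are hypothetical, just for illustration), the repository method is stubbed, but its return value is a real List of real objects, not another mock:

```java
import java.util.List;

// Hypothetical domain types, just for illustration.
record User(String name) {}

interface UserRepository {
    List<User> search(String query);
}

// The search() method itself is stubbed, but it hands back a real,
// fully functional List -- so the code under test exercises real
// collection behavior (size, iteration, isEmpty, ...).
class StubUserRepository implements UserRepository {
    @Override public List<User> search(String query) {
        return List.of(new User("alice"), new User("bob"));
    }
}

public class StubDemo {
    public static void main(String[] args) {
        List<User> result = new StubUserRepository().search("any query");
        System.out.println(result.size());   // prints 2
    }
}
```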


The alternative to what? Using mocks?

For example, one alternative is to let my IDE implement the interface (I don’t have to “write” a complete implementation), where the default implementations throw “not yet implemented” type exceptions - which clearly indicate that the omitted behavior is not a deliberate part of the test.

Any “mocked” behavior involves writing normal debuggable idiomatic Java code - no need to learn or use a weird DSL to express the behavior of a method body. And it’s far easier to diagnose what’s going on or expected while running the test - instead of the backwards mock approach where failures are typically reported in a non-local manner (test completes and you get unexpected invocation or missing invocation error - where or what should have made the invocation?).

My test implementation can evolve naturally - it’s all normal debuggable idiomatic Java.
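To make that concrete, here is a minimal sketch of the IDE-implemented-interface approach (the interface X and its methods are hypothetical, echoing the size()/isEmpty() example above):

```java
// Hypothetical interface standing in for the size()/isEmpty() example.
interface X {
    int size();
    boolean isEmpty();
}

// IDE-generated test implementation: every method throws, so any behavior
// a test did not deliberately supply fails loudly, right at the call site.
class ThrowingX implements X {
    @Override public int size() {
        throw new UnsupportedOperationException("size() not implemented for this test");
    }
    @Override public boolean isEmpty() {
        throw new UnsupportedOperationException("isEmpty() not implemented for this test");
    }
}

public class FakeDemo {
    public static void main(String[] args) {
        // The test overrides only what it means to exercise -- plain,
        // debuggable Java, no mocking DSL.
        X x = new ThrowingX() {
            @Override public boolean isEmpty() { return false; }
        };
        System.out.println(x.isEmpty());   // prints false
        // x.size() would throw here, pinpointing exactly what the
        // test never specified.
    }
}
```

If the code under test later switches from isEmpty() to size(), the unimplemented size() throws immediately at the call site, pointing straight at the gap instead of producing a non-local "missing invocation" report.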


It doesn't have to be a breaking change -- an empty result set could still be allowed. It could simply be a perf improvement that avoids calling an expensive function with an empty result set, when it is known that the function is a no-op in this case.


If it's not a breaking change, why would a unit test fail as a result, whether or not using mocks/fakes for the code not under test? Unit tests should test the contract of a unit of code. Testing implementation details is better handled with assertions, right?

If the code being mocked changes its invariants the code under test that depends on that needs to be carefully re-examined. A failing unit test will alert one to that situation.

(I'm not being snarky, I don't understand your point and I want to.)


The problem occurs when the mock is incomplete. Suppose:

1. Initially codeUnderTest() calls a dependency's dep.getFoos() method, which returns a list of Foos. This method is expensive, even if there are no Foos to return.

2. Calling the real dep.getFoos() is awkward, so we mock it for tests.

3. Someone changes codeUnderTest() to first call dep.getNumberOfFoos(), which is always quick, and subsequently call dep.getFoos() only if the first method's return value is nonzero. This speeds up the common case in which there are no Foos to process.

4. The test breaks because dep.getNumberOfFoos() has not been mocked.

You could argue that the original test creator should have defensively also mocked dep.getNumberOfFoos() -- but this quickly becomes an argument that the complete functionality of dep should be mocked.
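The four steps above can be sketched in plain Java with a hand-rolled mock (all names here are hypothetical; a mocking framework's unstubbed-method behavior differs in detail but fails for the same underlying reason):

```java
import java.util.List;

// Hypothetical dependency, mirroring steps 1-3 above.
interface Dep {
    List<String> getFoos();   // expensive, even when there are no Foos
    int getNumberOfFoos();    // cheap; added by the change in step 3
}

class CodeUnderTest {
    static int process(Dep dep) {
        // Step 3's optimization: skip the expensive call when there is
        // nothing to do.
        if (dep.getNumberOfFoos() == 0) return 0;
        return dep.getFoos().size();
    }
}

public class MockGapDemo {
    public static void main(String[] args) {
        // The original test mocked only getFoos() (step 2) ...
        Dep partialMock = new Dep() {
            @Override public List<String> getFoos() { return List.of("foo1", "foo2"); }
            // ... so the method added in step 3 was never mocked:
            @Override public int getNumberOfFoos() {
                throw new UnsupportedOperationException("getNumberOfFoos() was never mocked");
            }
        };
        try {
            CodeUnderTest.process(partialMock);
            System.out.println("test passed");
        } catch (UnsupportedOperationException e) {
            // Step 4: the test breaks, not because behavior regressed,
            // but because the mock is incomplete.
            System.out.println("test broke: " + e.getMessage());
        }
    }
}
```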

