There is one more trick we can use. The standard way to avoid bounds checks is to use iterators. But you clearly can't have two iterators here, can you? You need to mutate in two different places that are both on the move. That's not easy in Rust: you would need two mutable references simultaneously live that point into the same array, without proving that they point to two different values, which Rust forbids.

There are safe tools to work around this restriction of the borrow checker, though, so that you may have two live mutating iterators. Cell is "a mutable memory location" intended for copyable data (plain old data) only. Since you are restricted to mutating copyable data, such as your i32, you cannot mess up ownership of complex data. The docs describe copyable data as "types whose values can be duplicated simply by copying bits": plain old data such as numbers, tuples of copyable data, immutable borrows (&), raw pointers (*const / *mut), and structs and enums that implement Copy (where all fields are necessarily Copy themselves).

Cell provides us with a getter and a setter. It's simple to understand why it's safe: what's the harm in mutating the same number from two places? Where's the catch, though? There is one downside to Cell: it is not Sync, so it cannot be shared between threads, and therefore cannot cause data races.

    use std::cell::Cell;

    // Sorts a vector so that even numbers appear first.
    fn sort_array_by_parity(nums: Vec<i32>) -> Vec<i32> {
        let res: Vec<_> = nums.into_iter().map(Cell::new).collect();
        let mut even_cursor = res.iter();
        for n in &res {
            if n.get() % 2 == 0 {
                let dest = even_cursor.next().unwrap();
                let temp = n.get();
                n.set(dest.get());
                dest.set(temp);
            }
        }
        res.into_iter().map(|n| n.get()).collect()
    }
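The same two-cursor swap can be sketched outside Rust. Here is an illustrative Python version of the algorithm (Python has no borrow checker, so the aliasing problem that Cell solves simply does not arise):

```python
def sort_array_by_parity(nums):
    # Two-cursor in-place swap: 'write' trails over the slots reserved for
    # even numbers, mirroring even_cursor in the Cell-based Rust version.
    nums = list(nums)
    write = 0
    for read in range(len(nums)):
        if nums[read] % 2 == 0:
            nums[write], nums[read] = nums[read], nums[write]
            write += 1
    return nums

print(sort_array_by_parity([3, 1, 2, 4]))  # [2, 4, 3, 1]
```

Note that, like the Cell version, this preserves the relative order of the even elements.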
{ "domain": "codereview.stackexchange", "id": 43302, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, sorting, rust", "url": null }
I benchmarked these functions for you. Clearly the optimizer is smart enough to optimize away the clone-like into_iter-map-collect, even though it's not smart enough to optimize out the bounds checks; probably the loop logic is too complex for the optimizer. In the benchmarks, itertools fares the best, but if you're looking to preserve the original order of your even elements, then Cell should be your choice. And Cell is safe and stable Rust!

    parity-orig       time: [27.428 ns 27.453 ns 27.484 ns]
    Found 6 outliers among 100 measurements (6.00%)
      2 (2.00%) low mild
      1 (1.00%) high mild
      3 (3.00%) high severe

    parity-unsafe     time: [26.655 ns 26.686 ns 26.719 ns]
    Found 7 outliers among 100 measurements (7.00%)
      1 (1.00%) low mild
      4 (4.00%) high mild
      2 (2.00%) high severe

    parity-partition  time: [13.293 ns 13.351 ns 13.410 ns]
    Found 4 outliers among 100 measurements (4.00%)
      3 (3.00%) high mild
      1 (1.00%) high severe

    parity-cell       time: [16.214 ns 16.227 ns 16.244 ns]
    Found 8 outliers among 100 measurements (8.00%)
      2 (2.00%) high mild
      6 (6.00%) high severe
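For reference, the partition strategy that tops this benchmark can be sketched in Python (an illustration of the idea only, not the benchmarked Rust code):

```python
def sort_array_by_parity(nums):
    # Split the input into evens and odds, preserving order within each
    # group, then concatenate - the essence of the partition approach.
    evens = [n for n in nums if n % 2 == 0]
    odds = [n for n in nums if n % 2 != 0]
    return evens + odds

print(sort_array_by_parity([3, 1, 2, 4]))  # [2, 4, 3, 1]
```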
Title: Safely and elegantly handling UNIX signals in Qt applications

Question: When you search the web for how to gracefully shut down your Qt application when receiving SIGTERM, you are invariably directed to this informative, yet needlessly complicated, page full of code snippets: Calling Qt Functions From Unix Signal Handlers. So I gave it a shot and wrote a class which simply emits a normal Qt signal unixSignal(int signalNumber) after you call installSignalHandler(int signalNumber) once for each signal you are interested in, without the need to duplicate code for each desired signal. Now my question is: did I miss anything important to safely tread around the dangerous territory of signal handlers?

main.cpp

    #include <QApplication>
    #include "unixsignalnotifier.h"

    int main(int argc, char *argv[])
    {
        // Start of application initialization
        QApplication *application = new QApplication(argc, argv);

        // Make sure that the application terminates cleanly on various unix signals.
        QObject::connect(UnixSignalNotifier::instance(), SIGNAL(unixSignal(int)),
                         application, SLOT(quit()));
        UnixSignalNotifier::instance()->installSignalHandler(SIGINT);
        UnixSignalNotifier::instance()->installSignalHandler(SIGTERM);

        return application->exec();
    }

unixsignalnotifier.h

    #include <QObject>
    #include <QSocketNotifier>
    #include <signal.h>

    class UnixSignalNotifier : public QObject
    {
        Q_OBJECT
    public:
        static UnixSignalNotifier *instance();
        bool installSignalHandler(int signalNumber);

    signals:
        void unixSignal(int signalNumber);

    private slots:
        void _socketHandler(int pipeFd);

    private:
        explicit UnixSignalNotifier(QObject *parent = 0);
        ~UnixSignalNotifier();
        static void _signalHandler(int signalNumber);

        static int readPipes[_NSIG];
        static int writePipes[_NSIG];
        static QSocketNotifier *notifiers[_NSIG];
    };

unixsignalnotifier.cpp

    #include "unixsignalnotifier.h"
    #include <unistd.h>
    #include <sys/socket.h>
{ "domain": "codereview.stackexchange", "id": 43303, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, unix, qt, signal-handling", "url": null }
    int UnixSignalNotifier::readPipes[_NSIG] = {};
    int UnixSignalNotifier::writePipes[_NSIG];
    QSocketNotifier *UnixSignalNotifier::notifiers[_NSIG];

    UnixSignalNotifier::UnixSignalNotifier(QObject *parent) : QObject(parent)
    {
    }

    UnixSignalNotifier::~UnixSignalNotifier()
    {
        for (int i = 0; i < _NSIG; i++) {
            if (notifiers[i] != NULL) {
                delete notifiers[i];
                notifiers[i] = NULL;
                close(readPipes[i]);
                close(writePipes[i]);
                readPipes[i] = writePipes[i] = 0;
            }
        }
    }

    UnixSignalNotifier *UnixSignalNotifier::instance()
    {
        static UnixSignalNotifier *inst = new UnixSignalNotifier();
        return inst;
    }

    bool UnixSignalNotifier::installSignalHandler(int signalNumber)
    {
        Q_ASSERT(1 <= signalNumber && signalNumber < _NSIG);
        Q_ASSERT(readPipes[signalNumber] == 0);
        Q_ASSERT(writePipes[signalNumber] == 0);
        Q_ASSERT(notifiers[signalNumber] == NULL);

        struct sigaction sigact;
        sigact.sa_handler = UnixSignalNotifier::_signalHandler;
        sigemptyset(&sigact.sa_mask);
        sigact.sa_flags = 0;
        sigact.sa_flags |= SA_RESTART;
        if (sigaction(signalNumber, &sigact, 0)) {
            qFatal("%s: Couldn't register signal handler", Q_FUNC_INFO);
        }

        int sockets[2];
        if (::socketpair(AF_UNIX, SOCK_STREAM, 0, sockets)) {
            qFatal("%s: Couldn't create socketpair", Q_FUNC_INFO);
        }
        writePipes[signalNumber] = sockets[0];
        readPipes[signalNumber] = sockets[1];

        notifiers[signalNumber] = new QSocketNotifier(readPipes[signalNumber],
                                                      QSocketNotifier::Read, 0);
        connect(notifiers[signalNumber], SIGNAL(activated(int)),
                this, SLOT(_socketHandler(int)));
        return true;
    }
    void UnixSignalNotifier::_socketHandler(int pipeFd)
    {
        int signalNumber = -1;
        for (int i = 1; i < _NSIG; i++) {
            if (readPipes[i] == pipeFd)
                signalNumber = i;
        }
        if (signalNumber < 0) {
            qWarning("%s: Unable to find signal number for socket fd %d",
                     Q_FUNC_INFO, pipeFd);
            return;
        }

        notifiers[signalNumber]->setEnabled(false);
        char dummy;
        ::read(readPipes[signalNumber], &dummy, sizeof(dummy));
        emit unixSignal(signalNumber);
        notifiers[signalNumber]->setEnabled(true);
    }

    void UnixSignalNotifier::_signalHandler(int signalNumber)
    {
        if (writePipes[signalNumber] != 0) {
            char dummy = 1;
            ::write(writePipes[signalNumber], &dummy, sizeof(dummy));
        }
    }

Answer: I did something similar a while ago. Instead of one singleton that handles all signals, I found it easier (for the caller) to have one object per signal:

    #include <QObject>
    #include <QPointer>
    #include <array>

    // A UnixSignalHandler catches a particular Unix signal (e.g. SIGTERM) and emits
    // a Qt signal which can be connected to a slot. Note that a process cannot
    // catch SIGKILL - a handler for SIGKILL will never emit.
    class UnixSignalHandler: public QObject
    {
        Q_OBJECT
    public:
        UnixSignalHandler(int signal, QObject *parent = nullptr);
        static const int max_signal = 32;

    signals:
        // This gives no indication of which signal has been caught; you may achieve
        // that by connecting to a QSignalMapper if required.
        void raised() const;

    private slots:
        void consumeInput(int fd) const;

    private:
        int fd[2];
        static std::array<QPointer<UnixSignalHandler>, max_signal> handler;
        static void handle(int signal);
    };

Then the implementation is:

    #include "unixsignalhandler.h"
    #include <QDebug>
    #include <QSocketNotifier>
    #include <sys/socket.h>
    #include <signal.h>
    #include <unistd.h>
    UnixSignalHandler::UnixSignalHandler(int signal, QObject *parent)
        : QObject(parent)
    {
        if (handler[signal] != nullptr) {
            qCritical() << "ignoring request to register duplicate handler for signal"
                        << signal;
            return;
        }
        if (::socketpair(AF_UNIX, SOCK_STREAM, 0, fd)) {
            qCritical() << "failed to create socket pair for" << signal
                        << "-" << strerror(errno);
            return;
        }

        // There's not very much that a signal handler can legally do. One thing
        // that is permitted is to write to an open file descriptor. When our
        // handler is called, we'll write a single byte to a socket, and this socket
        // notifier will then learn of the signal outside of the signal handler
        // context.
        auto notifier = new QSocketNotifier(fd[1], QSocketNotifier::Read, this);
        connect(notifier, &QSocketNotifier::activated,
                this, &UnixSignalHandler::consumeInput);

        struct sigaction action;
        action.sa_handler = &UnixSignalHandler::handle;
        sigemptyset(&action.sa_mask);
        action.sa_flags = SA_RESTART;
        if (::sigaction(signal, &action, 0)) {
            qCritical() << "failed to add sigaction for" << signal
                        << "-" << strerror(errno);
            return;
        }
        handler[signal] = this;
    }

    // This slot is connected to our socket notifier. It reads the byte that the
    // signal handler wrote (to reset the notifier) and emits a Qt signal.
    void UnixSignalHandler::consumeInput(int fd) const
    {
        char c;
        if (::read(fd, &c, sizeof c) <= 0)
            qWarning() << "Error reading fd" << fd << "(ignored) -" << strerror(errno);
        emit raised();
    }
    // This static method is the signal handler called when the process receives a
    // Unix signal. It writes a single byte to our open file descriptor.
    void UnixSignalHandler::handle(int signal)
    {
        if (signal < 0 || static_cast<size_t>(signal) >= handler.size()) {
            qWarning() << "ignored out-of-range signal" << signal;
            return;
        }
        auto const h = handler[signal];
        if (!h) {
            qWarning() << "ignored unhandled signal" << signal;
            return;
        }
        char c = 0;
        if (::write(h->fd[0], &c, sizeof c) <= 0)
            qWarning() << "Error writing signal" << signal
                       << "(ignored) -" << strerror(errno);
    }

    std::array<QPointer<UnixSignalHandler>, UnixSignalHandler::max_signal>
        UnixSignalHandler::handler;

The cost of this approach is that it consumes a file-descriptor pair for each signal we want to handle, rather than multiplexing all signals over a single channel. Usage is slightly simpler than yours, I think (but there's not much in it). When I construct the 'main' class of my application:

    connect(new UnixSignalHandler(SIGTERM, this), &UnixSignalHandler::raised,
            qApp, &QCoreApplication::quit);
    connect(new UnixSignalHandler(SIGINT, this), &UnixSignalHandler::raised,
            qApp, &QCoreApplication::quit);

I could probably just make qApp be the parent, and create/connect the handlers early in main().

Other differences

Apart from the one/many signals per handler, my code differs from yours in the following:
I omitted the destructor, meaning an fd leak (not a problem, as my handlers live for the length of the program), but if I implemented one, it would be shorter: just the two close calls (a QPointer is automatically reset to nullptr when the object it points to is deleted, so the handler array looks after itself).

I always avoid qFatal() and Q_ASSERT (in my applications, it's better to manage without the handler than to exit).

I'm playing fast and loose with qWarning() inside the signal handler (but at that point we're in trouble anyway, if we've lost the reader).

I did find it helpful to include the error indication in the warning messages when errno has been set.

You've found a value _NSIG for the range of signals you support. I couldn't find it documented; is that just an artefact of your particular <signal.h>?

I also wrote a QTest unit test:

    void raiseSignal_USR1()
    {
        UnixSignalHandler handler(SIGUSR1);
        QSignalSpy spy(&handler, SIGNAL(raised()));
        QVERIFY(spy.isValid());
        QVERIFY(spy.isEmpty());
        ::raise(SIGUSR1);
        QVERIFY(spy.wait());
    }

I think both our implementations could be improved by taking the best features from the other.

Further observations

QSocketNotifier can be forward-declared in the header; it needs to be a complete type only in the implementation. My implementation doesn't need to bring <signal.h> into the header, but I think you need it for the _NSIG constant.

Re-reading, I see that you also use a file descriptor pair per signal (but not a QObject per signal). We can both save on fds by multiplexing the signal number as the data sent across the pipe.
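The core trick both implementations rely on, including the multiplexing variant suggested at the end, can be sketched in a few lines of Python (a POSIX-only illustration, assuming Python 3.8+ for signal.raise_signal; it is not the Qt code above):

```python
import signal
import socket

# Self-pipe trick via a socketpair: the signal handler does the one thing
# that is safe - write a byte to an open descriptor - and the event-loop
# side reads it later, outside handler context. Sending the signal number
# itself as the payload is the "multiplexing" variant.
rsock, wsock = socket.socketpair()
wsock.setblocking(False)

def handler(signum, frame):
    wsock.send(bytes([signum]))  # a single byte; no locks, no allocation

signal.signal(signal.SIGUSR1, handler)
signal.raise_signal(signal.SIGUSR1)  # deliver the signal to ourselves

data = rsock.recv(1)                 # consumed outside the handler
print(data[0] == signal.SIGUSR1)
```

A real event loop would watch rsock with select/poll (the role QSocketNotifier plays above) instead of calling recv directly.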
Title: C# Language Lexer

Question: Here is a lexer for a programming language I am working on. Any feedback would be appreciated. I only started learning C# a couple of days ago, so please excuse my newbie code :)

    namespace Sen
    {
        enum TokenType
        {
            IDENTIFIER,
            NUMBER,
            STRING,
            SEMICOLON,
            PLUS,
            MINUS,
            STAR,
            SLASH,
        }

        class Token
        {
            public TokenType type;
            public string value;

            public Token(TokenType type, string value = "")
            {
                this.type = type;
                this.value = value;
            }
        }

        class Lexer
        {
            public readonly List<Token> tokens;
            private int charIdx;
            private readonly string sourceRaw;

            char CurrentChar { get { return sourceRaw[charIdx]; } }

            public Lexer(string sourceRaw)
            {
                this.sourceRaw = sourceRaw;
                tokens = new List<Token>();
            }

            bool IsEnd { get { return charIdx >= sourceRaw.Length; } }

            char? NextChar()
            {
                try
                {
                    return sourceRaw[charIdx++];
                }
                catch (IndexOutOfRangeException)
                {
                    return null;
                }
            }
{ "domain": "codereview.stackexchange", "id": 43304, "tags": "c#, language-design, lexical-analysis" }
            public void Lex()
            {
                while (!IsEnd)
                {
                    switch (CurrentChar)
                    {
                        case ';': AddToken(TokenType.SEMICOLON); break;
                        case ' ': break;
                        case '\'':
                        case '"': LexString(); break;
                        case '+': AddToken(TokenType.PLUS); break;
                        case '-': AddToken(TokenType.MINUS); break;
                        case '*': AddToken(TokenType.STAR); break;
                        case '/': AddToken(TokenType.SLASH); break;
                        default:
                            if (char.IsLetter(CurrentChar))
                            {
                                LexIdentifier();
                                continue;
                            }
                            else if (char.IsNumber(CurrentChar))
                            {
                                LexNumber();
                                continue;
                            }
                            throw new UnexpectedCharacterException(CurrentChar);
                    }
                    NextChar();
                }
            }

            void AddToken(TokenType type, string value = "")
            {
                tokens.Add(new Token(type, value));
            }

            void LexIdentifier()
            {
                int startIdx = charIdx;
                int endIdx = startIdx;
                while (!IsEnd && CurrentChar != ' ' && CurrentChar != ';')
                {
                    if (!char.IsLetterOrDigit(CurrentChar) && CurrentChar != '_')
                        throw new UnexpectedCharacterException(CurrentChar);
                    NextChar();
                    endIdx++;
                }
                string value = sourceRaw[startIdx..endIdx];
                AddToken(TokenType.IDENTIFIER, value);
            }

            void LexNumber()
            {
                int startIdx = charIdx;
                int endIdx = startIdx;
                while (!IsEnd && CurrentChar != ' ' && CurrentChar != ';'
                       && char.IsNumber(CurrentChar))
                {
                    NextChar();
                    endIdx++;
                }
                string value = sourceRaw[startIdx..endIdx];
                AddToken(TokenType.NUMBER, value);
            }

            void LexString()
            {
                char opening = CurrentChar;
                int startIdx = charIdx + 1;
                int endIdx = startIdx;
                NextChar();
                while (!IsEnd && CurrentChar != opening)
                {
                    NextChar();
                    endIdx++;
                }
                if (IsEnd) throw new ExpectedCharacterException(opening);
                string value = sourceRaw[startIdx..endIdx];
                AddToken(TokenType.STRING, value);
            }
        }
    }

Answer: Welcome to CR and to C#. First things first, you should become familiar with the C# Naming Conventions. A few that I choose to emphasize in regards to your post:

In the Token class, the fields type and value should become properties named Type and Value. In general, fields are private unless they are constant or static. If you wish to expose a field as public, then it should be a property instead. Also, properties and methods should be named with Pascal casing.

Though not required, I personally prefer to decorate all properties, fields, and methods with their access modifier, even if it is private. Granted, private is the default, but I want to make sure that a beginner has given it thought and explicitly marked it so.

Regarding braces, there are 2 areas for improvement. One, the current thinking with C# is that the open and close braces occur on their own lines. And two, one-liners are frowned upon and should incorporate braces. Taking that into consideration, this would be a rewrite of one method:

    private char? NextChar()
    {
        try
        {
            return sourceRaw[charIdx++];
        }
        catch (IndexOutOfRangeException)
        {
            return null;
        }
    }
Except that entire method can use a less expensive if rather than a try-catch block:

    private char? NextChar() => (charIdx >= 0 && !IsEnd) ? sourceRaw[charIdx++] : null;

Why bother to catch an exception if all you do is ignore it? Especially when there is simple code that can easily work around it.

Back to braces, lines such as:

    if (IsEnd) throw new ExpectedCharacterException(opening);

should be converted to:

    if (IsEnd)
    {
        throw new ExpectedCharacterException(opening);
    }

There are a few properties or methods where you may consider using =>. Example:

    private bool IsEnd => charIdx >= sourceRaw.Length;

You seem to use CurrentChar != ' ' && CurrentChar != ';' frequently. Apparently, these are delimiters between tokens and values. The DRY principle (Don't Repeat Yourself) suggests this could become its own property:

    private bool IsDelimiter => CurrentChar == ' ' || CurrentChar == ';';

Elsewhere in the code you would replace CurrentChar != ' ' && CurrentChar != ';' with !IsDelimiter. The advantage here, besides readability, is that if you were ever to add a 3rd delimiter in the future, you would only have to change it in one spot.
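The scanning strategy under review (an index walking the raw source, with slices taken between a start and end index per token) can be sketched compactly in Python; this is an illustration of the structure, not a translation of the C# code:

```python
def lex(src):
    # Minimal index-based scanner: one cursor, slices for token values.
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c == ';':
            tokens.append(('SEMICOLON', ''))
            i += 1
        elif c == ' ':
            i += 1
        elif c.isalpha():
            j = i
            while j < len(src) and (src[j].isalnum() or src[j] == '_'):
                j += 1
            tokens.append(('IDENTIFIER', src[i:j]))
            i = j
        elif c.isdigit():
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(('NUMBER', src[i:j]))
            i = j
        else:
            raise ValueError(f"unexpected character {c!r}")
    return tokens

print(lex("foo 42;"))
# [('IDENTIFIER', 'foo'), ('NUMBER', '42'), ('SEMICOLON', '')]
```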
Title: AST node class generator

Question: Given two hashes, my script generates two (poorly formatted) C# source files containing some classes that represent several AST nodes a programming language needs, and an implementation of the Visitor pattern for each. While I care a lot about the formatting of my Raku code, the formatting of the C# output is of no particular concern; I let Rider clean it up for me. My program has one dependency aside from the standard library: version 0.0.6 of the Map::Ordered module (installable using zef install --/test "Map::Ordered:ver<0.0.6>:auth<zef:lizmat>"). It also assumes your terminal supports ANSI colors (and that you want to see them). By default, the script writes the files to the src directory relative to the current working directory, but you can specify a different directory using the -o/--out-dir option. When you run it, it prints a colored confirmation line for each file written. (I've also uploaded the output files to GitHub Gist, should you want to see them.)

The code

    #!/usr/bin/env raku
    use Map::Ordered:ver<0.0.6>:auth<zef:lizmat>;

    unit sub MAIN(Str :o(:$out-dir) = 'src');

    my %exprs is Map::Ordered =
        Binary   => [left => 'Expr', operator => 'Token', right => 'Expr'],
        Grouping => [expression => 'Expr'],
        Literal  => [value => 'object?'],
        Unary    => [operator => 'Token', right => 'Expr'];

    my %stmts is Map::Ordered =
        ExpressionStatement => [expression => 'Expr'],
        Print               => [expression => 'Expr'];

    generate :base-class('Expr'), :classes(%exprs), :$out-dir;
    generate :base-class('Stmt'), :classes(%stmts), :$out-dir;

    sub generate(:$base-class!, :%classes!, :$out-dir!) {
        my $source = '';
        $source ~= qq:to/END/;
        namespace Lox;

        internal abstract class $base-class \{
        END

        for %classes.kv -> $class-name, @fields {
            my @types = @fields.map: *.value;
            my @names = @fields.map: *.key;
            my @names-and-types = flat @names Z @types;
{ "domain": "codereview.stackexchange", "id": 43305, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "portability, raku", "url": null }
            my $fields = format(-> $type, $name { "internal $type {$name.tc} \{ get; \}" },
                                @names-and-types);
            my $parameters = format(-> $type, $name { "$type {rename-reserved-word($name)}" },
                                    @names-and-types, ', ');
            my $initializers = format(-> $name { "{$name.tc} = {rename-reserved-word($name)};" },
                                      @names);

            $source ~= qq:to/END/;
            internal class $class-name : $base-class \{
            $fields

            internal {$class-name}($parameters) \{
            $initializers
            }

            internal override T Accept<T>(IVisitor<T> visitor) => visitor.Visit(this);
            }
            END
        }

        $source ~= qq:to/END/;
        internal interface IVisitor<T> \{
        {format({ "public T Visit($^type expr);" }, %classes.keys)}
        }

        internal abstract T Accept<T>(IVisitor<T> visitor);
        }
        END

        my $path = IO::Spec::Unix.catpath($, $out-dir, "$base-class.cs");
        spurt $path, $source;
        say "\e[1;32m\c[CHECK MARK]\e[0m Wrote \e[36m{$base-class}\e[0m classes to \e[1;4m$path\e[0m";
    }

    sub rename-reserved-word($identifier) {
        $identifier eq 'operator' ?? '@operator' !! $identifier
    }

    multi sub format(&fn where *.signature.params == 1, @xs, $sep = "\n") {
        @xs.map(&fn).join($sep)
    }

    multi sub format(&fn where *.signature.params == 2, @xs, $sep = "\n") {
        @xs.map({ fn($^b, $^a) }).join($sep)
    }

The only line I'm really not sure about is this one:

    my $path = IO::Spec::Unix.catpath($, $out-dir, "$base-class.cs");

It feels strange to have to use a platform-specific function right there in the middle of a script that is otherwise pretty platform-agnostic, but I couldn't find a function in the standard library that does the right thing across all platforms. In a review, I'd like for that to be addressed, as well as the usual stuff.

Answer:

IO::Path

    my $path = IO::Spec::Unix.catpath($, $out-dir, "$base-class.cs");
You need to work with the IO::Path type. You can do $out-dir.IO.add: "$base-class.cs", but I recommend doing the IO part in MAIN's signature. Also, your code doesn't check whether $out-dir exists. So I would make these changes:

    unit sub MAIN(IO::Path(Str) :o(:$out-dir) = 'src');

    $out-dir.mkdir: 0o755 unless $out-dir.d;
    my $path = $out-dir.add: "$base-class.cs";

format function

There is no need to use where clauses in your signatures; you can specify the signature of &fn directly:

    multi format(&fn:($), @xs, $sep = "\n") { @xs.map(&fn).join($sep) }
    multi format(&fn:($, $), @xs, $sep = "\n") { @xs.map(&fn).join($sep) }

If you do that, then there's no need for a multi at all:

    sub format(&fn:($, $?), @xs, $sep = "\n") { @xs.map(&fn).join($sep) }

Next, I'll mention some things that may just be personal preferences, but they could be of value if you don't know them already.

You can enable/disable features in quoting constructs, so you can disable closures if you are not going to use them, and then you won't have to escape braces:

    $source ~= qq:!c:to/END/;
    namespace Lox;

    internal abstract class $base-class {
    END

You can use single quotes and temporarily enable interpolation inside them:

    $source ~= q:to/END/;
    internal interface IVisitor<T> {
    \qq「{format({ "public T Visit($^type expr);" }, %classes.keys)}」
    }

    internal abstract T Accept<T>(IVisitor<T> visitor);
    }
    END

You can call methods in strings:

    qq:!c「internal $type $name.tc() { get; }」

Same for functions:

    "$type &rename-reserved-word($name)"

Update: There is no need to declare/initialize $source separately:

    my $source = qq:to/END/;
    namespace Lox;

    internal abstract class $base-class \{
    END
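The portability point here is language-independent: build paths with a path type rather than a platform-specific join. For comparison, the same pattern in Python's pathlib (names are illustrative):

```python
from pathlib import Path

# Path objects give a portable join on every platform, replacing the role
# of the platform-specific catpath in the original script.
out_dir = Path("src")
path = out_dir / "Expr.cs"
# The mkdir-unless-exists step would be:
# out_dir.mkdir(parents=True, exist_ok=True)
print(path.name)  # Expr.cs
```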
Alternative solutions

With your format function you're doing extra iterations. This can be a design decision, but you can do everything in one loop:

    for %classes.kv -> $class-name, @names-and-types {
        my (@fields, @parameters, @initializers);
        for @names-and-types -> (:key($name), :value($type)) {
            @fields.append: qq:!c「internal $type $name.tc() { get; }」;
            @parameters.append: "$type &rename-reserved-word($name)";
            @initializers.append: "$name.tc() = &rename-reserved-word($name);";
        }

        $source ~= qq:to/END/;
        internal class $class-name : $base-class \{
        @fields.join("\n")

        internal {$class-name}(@parameters.join(', ')) \{
        @initializers.join("\n")
        }

        internal override T Accept<T>(IVisitor<T> visitor) => visitor.Visit(this);
        }
        END
    }

You can eliminate the Map::Ordered dependency by using Arrays/Lists the way you did for the inner lists, and put your base classes in one Map:

    my %base-class is Map =
        Expr => [Binary   => [left => 'Expr', operator => 'Token', right => 'Expr'],
                 Grouping => [expression => 'Expr'],
                 Literal  => [value => 'object?'],
                 Unary    => [operator => 'Token', right => 'Expr']],
        Stmt => [ExpressionStatement => [expression => 'Expr'],
                 Print               => [expression => 'Expr']];

    for %base-class.kv -> $base-class, @classes {
        generate :$base-class, :@classes, :$out-dir;
    }

Then you can also eliminate the format call for IVisitor and remove the format function:

    my @visits;
    for @classes -> (:key($class-name), :value(@names-and-types)) {
        my (@fields, @parameters, @initializers);
        for @names-and-types -> (:key($name), :value($type)) {
            @fields.append: qq:!c「internal $type $name.tc() { get; }」;
            @parameters.append: "$type &rename-reserved-word($name)";
            @initializers.append: "$name.tc() = &rename-reserved-word($name);";
        }
        $source ~= qq:to/END/;
        internal class $class-name : $base-class \{
        @fields.join("\n")

        internal {$class-name}(@parameters.join(', ')) \{
        @initializers.join("\n")
        }

        internal override T Accept<T>(IVisitor<T> visitor) => visitor.Visit(this);
        }
        END

        @visits.append: "public T Visit($class-name expr);"
    }

    $source ~= qq:to/END/;
    internal interface IVisitor<T> \{
    @visits.join("\n")
    }

    internal abstract T Accept<T>(IVisitor<T> visitor);
    }
    END
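The single-pass idea (building fields, constructor parameters, and initializers together in one loop over each class's name/type pairs) can be sketched in Python. This is an illustration of the shape of the generator only; the @operator renaming and output formatting from the Raku version are omitted:

```python
# Hypothetical class descriptions, mirroring the %exprs hash above.
classes = {
    "Binary":   [("left", "Expr"), ("operator", "Token"), ("right", "Expr")],
    "Grouping": [("expression", "Expr")],
}

def generate(base, classes):
    # One pass per class: properties, parameter list and initializers are
    # accumulated together instead of re-iterating the field list.
    out = [f"internal abstract class {base} {{"]
    for cls, fields in classes.items():
        props = [f"internal {t} {n.title()} {{ get; }}" for n, t in fields]
        params = ", ".join(f"{t} {n}" for n, t in fields)
        inits = [f"{n.title()} = {n};" for n, t in fields]
        out += [f"internal class {cls} : {base} {{", *props,
                f"internal {cls}({params}) {{", *inits, "}", "}"]
    out.append("}")
    return "\n".join(out)

print("internal class Binary : Expr {" in generate("Expr", classes))  # True
```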
Title: Program to check if a window is opened or not during some time interval

Question: I'm trying to write a program that will check whether a window is opened or not during a given time. I was able to do that, but now I'm trying to make my code look cleaner. I don't like how I handle all the possible error cases. Here is my current code:

    int APIENTRY wWinMain(_In_ HINSTANCE hInstance,
                          _In_opt_ HINSTANCE hPrevInstance,
                          _In_ LPWSTR lpCmdLine,
                          _In_ int nCmdShow)
    {
        int nArgs = 0;
        LPWSTR* Name = CommandLineToArgvW((LPCWSTR)lpCmdLine, &nArgs);
        HWND hwnd = NULL;

        if (Name[1] == NULL) {
            MessageBox(NULL, L"Please enter a time", NULL, MB_ICONERROR | MB_OK); // No timer input
        }
        else {
            std::wstring wsTimer = Name[1];
            std::string sTimer(wsTimer.begin(), wsTimer.end());
            string::size_type tTimer = sTimer.find_first_not_of("0123456789");
            if (tTimer != std::string::npos) {
                MessageBox(NULL, L"Invalid timer input", NULL, MB_ICONERROR | MB_OK); // Letters present in timer input
            }
            else {
                int Timer = std::stoi(sTimer);
                if (Timer == 0) {
                    MessageBox(NULL, L"Invalid timer input", NULL, MB_ICONERROR | MB_OK); // Input 0 as timer
                }
                else {
                    int i = 0;
                    while (i++ < Timer) {
                        hwnd = FindWindowW(NULL, *Name);
                        if (hwnd == NULL) {
                            Sleep(1000);
                        }
                        else {
                            return 0;
                        }
                    }
                    MessageBox(NULL, L"Window is not found", NULL, MB_ICONERROR | MB_OK); // Window not open, wrong name, not found
                }
            }
        }
    }
{ "domain": "codereview.stackexchange", "id": 43306, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, error-handling, timer, gui, winapi", "url": null }
There are just way too many if/else blocks. Are there suggestions on how I can improve it and make it cleaner?

Answer: I would probably start by defining a function named require or something on that order:

    void message(wchar_t const *msg) {
        MessageBoxW(NULL, msg, NULL, MB_ICONERROR | MB_OK);
    }

    template <typename F>
    void require(F f, wchar_t const *errorString) {
        if (!f()) {
            message(errorString);
            exit(EXIT_FAILURE);
        }
    }

Note: I've also separated the call to MessageBox out into its own function, both for the sake of portability and because in real use MessageBox is likely to be annoying for a command-line application, so you'll probably soon want to switch to something like printing to std::cerr.

Given the widespread use and understanding of argc and argv, I'd (strongly) favor them over nArgs and Names. Using those, your code for main could start off something like this:

    int argc = 0;
    wchar_t** argv = CommandLineToArgvW(lpCmdLine, &argc);

    require([&]{ return argv[1] != nullptr; }, L"Please enter a time");

std::stoi can tell you how many characters it converted. I'd make use of that to simplify checking the input string a bit:

    std::wstring wsDuration(argv[1]);
    std::size_t count;
    int duration = std::stoi(argv[1], &count);
    require([&]{ return count == wsDuration.size(); }, L"Invalid Timer input");
    require([&]{ return duration > 0; }, L"Invalid Timer Input");

As for the loop, your situation seems to fit well with a normal counted loop. I'd also generally prefer the standard library functions for sleeping:

    for (int i = 0; i < duration; i++) {
        HWND hwnd = FindWindowW(NULL, argv[0]);
        if (hwnd != nullptr)
            return 0;
        std::this_thread::sleep_for(1s);
    }
    message(L"Window is not found");

Putting those together, we end up with something like this:

    #include <windows.h>
    #include <thread>
    #include <chrono>

    using namespace std::literals;
{ "domain": "codereview.stackexchange", "id": 43306, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, error-handling, timer, gui, winapi", "url": null }
void message(wchar_t const *msg) {
    MessageBoxW(NULL, msg, NULL, MB_ICONERROR | MB_OK);
}

template <typename F>
void require(F f, wchar_t const *errorString) {
    if (!f()) {
        message(errorString);
        exit(EXIT_FAILURE);
    }
}

int APIENTRY wWinMain(_In_ HINSTANCE hInstance,
                      _In_opt_ HINSTANCE hPrevInstance,
                      _In_ LPWSTR lpCmdLine,
                      _In_ int nCmdShow)
{
    int argc = 0;
    wchar_t** argv = CommandLineToArgvW(lpCmdLine, &argc);
    require([&]{ return argv[1] != nullptr; }, L"Please enter a time");

    std::wstring wsDuration(argv[1]);
    std::size_t count;
    int duration = std::stoi(argv[1], &count);
    require([&]{ return count == wsDuration.size(); }, L"Invalid Timer input");
    require([&]{ return duration > 0; }, L"Invalid Timer Input");

    for (int i=0; i<duration; i++) {
        HWND hwnd = FindWindowW(NULL, argv[0]);
        if (hwnd != nullptr)
            return 0;
        std::this_thread::sleep_for(1s);
    }
    message(L"Window is not found");
    return 0;
}
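The two ideas in this answer (a guard helper that fails fast with a message, and checking that the number parser consumed the whole argument) are language-neutral. Here is a minimal sketch of both in Python; the names require and parse_duration are illustrative, not from the original code:

```python
import sys


def require(condition: bool, error_string: str) -> None:
    """Fail fast with a message, like the C++ require() template."""
    if not condition:
        print(error_string, file=sys.stderr)
        raise SystemExit(1)


def parse_duration(arg: str) -> int:
    """Parse a positive integer duration, rejecting trailing junk.

    Python's int() refuses partial parses such as "5x" outright, which
    plays the same role as comparing the std::stoi position count
    against the full string length.
    """
    try:
        duration = int(arg)
    except ValueError:
        require(False, "Invalid Timer input")
    require(duration > 0, "Invalid Timer Input")
    return duration
```

A caller would then write something like require(len(sys.argv) > 1, "Please enter a time") before touching the argument at all.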
{ "domain": "codereview.stackexchange", "id": 43306, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, error-handling, timer, gui, winapi", "url": null }
python, file-system
Title: Single-file stdlib-only python backup utility

Question: I have written this backup utility to keep incremental backups by copying new and modified files, and hard linking unchanged or simply moved files. In an attempt to speed up the comparisons, I save a record of the file stats from the previous backup to avoid iterating over the old backup directory. The backup is called from the command line passing the destination folder, followed by the folder to be backed-up. Configuration options are taken from text files in the same folder as the destination of the backup.

I have done some amount of testing for all my "#TODO's", but not enough yet to feel confident it's particularly robust (particularly not on other OS's than Windows 10). No backup pruning is performed or intended as of yet.

A good place to start is by calling the help from the command line:

>python backup_utility.py -h

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Mar 11 13:20:15 2022

@author: Aaron Thompson
@license: CC BY 3.0
@license-url: https://creativecommons.org/licenses/by/3.0/
"""
# main imports
import argparse
from collections.abc import Iterable
from datetime import datetime
from inspect import cleandoc
import logging
from logging.handlers import MemoryHandler, RotatingFileHandler
import os
from os import stat_result
from pathlib import Path
import pickle
import re
import shutil
import stat
import sys

__version__ = "2022-05-03"

# TODO test logging cases
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(formatter)
console_handler.setLevel(logging.INFO)
logger.addHandler(console_handler)
# handler for buffering logging messages before log file is defined
memory_handler = MemoryHandler(1e6)
logger.addHandler(memory_handler)
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
# DEFAULT OPTIONS
options_template = cleandoc(r"""
#Backup job options
#
#lines starting with "#" are ignored
#lines of the form "key = value" are added to the options dictionary
#backup folder naming convention based on python datetime formatting
#https://docs.python.org/3/library/datetime.html
format = {format}
#skip backup if no files are changed? True, False
skip = {skip}
#follow symbolic links?
symlinks = {symlinks}
#file operation error behavior: [Ignore, Warn, Fail]
errors = {errors}
#log file location (leaving this empty disables logging to file)
logfile =
#log file verbosity: [DEBUG, INFO, WARNING, ERROR, CRITICAL]
loglevel = INFO
""")

default_options = {"format": "%Y%m%d-%H%M%S",
                   "skip": "True",
                   "symlinks": "True",
                   "errors": "Warn",
                   "logfile": "",
                   "loglevel": "INFO"}

# DEFAULT FILTERS
filter_default = cleandoc(r"""
#Backup file/folder configuration:
# blacklist file includes filters for files/folders to be skipped
# whitelist file includes filters for files/folders which should
# be included, overriding the blacklist.
#
# Blank lines and lines starting with "#" are skipped
# One filter per line: exact file or folder matches
# Lines starting with ^ are python style regex filters
#
# Example: filter a specific file
# C:\Users\uname\Documents\temporary.txt
# Example: filter an entire folder (and subfolders)
# C:\Users\uname\AppData\
# Example: regex filter for selecting .log files from a project folder
# ^C:\\Users\\uname\\project\\*\.log$
""")


# TODO test robustness
def get_config(dest: Path) -> tuple[dict[str, str], list[str], list[str]]:
    op = (dest / "BackupOptions.txt")
    wl = (dest / "Whitelist.txt")
    bl = (dest / "Blacklist.txt")
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
    if op.exists() and op.is_file():
        logger.debug("reading config")
        with open(op) as f:
            options = list(f)
        options = [s.strip() for s in options]  # strip whitespace
        options = [s for s in options if s and not s.startswith("#")]  # strip empty and comments
        options = {line.split("=")[0].strip(): line.split("=")[1].strip()
                   for line in options if '=' in line}
        for option in default_options:
            if option not in options:
                logger.warning(f"option:{option} missing from BackupOptions.txt: using default: {default_options[option]}")
                options[option] = default_options[option]
        # setup logger file handler options here and append buffered logs
        if options['logfile']:
            logger.debug("setting up rotating log file handler")
            # TODO make log file size and number of logs configurable? or default is good enough for anyone?
            file_handler = RotatingFileHandler(options['logfile'], maxBytes=2**20, backupCount=10)
            try:
                level = {"DEBUG": logging.DEBUG,
                         "INFO": logging.INFO,
                         "WARNING": logging.WARNING,
                         "ERROR": logging.ERROR,
                         "CRITICAL": logging.CRITICAL}[options["loglevel"]]
            except KeyError:
                logger.warning(f"{options['loglevel']} is not a valid 'loglevel': defaulting to INFO")
                level = logging.INFO
            file_handler.setLevel(level)
            file_handler.addFilter(lambda record: record.levelno >= level)
            file_handler.setFormatter(formatter)
            logger.debug("swapping out memory handler for file handler")
            logger.addHandler(file_handler)
            logger.removeHandler(memory_handler)
            memory_handler.setTarget(file_handler)
            memory_handler.flush()
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
        logger.debug(f"config={options}")
    else:
        logger.info("creating default config file")
        with open(op, "w") as f:
            f.write(options_template.format(**default_options))
        return get_config(dest)  # recursing is easier so default config can just be `options_default`

    if wl.exists() and wl.is_file():
        logger.debug("reading whitelist")
        with open(wl) as f:
            whitelist = list(f)
        whitelist = [s.strip() for s in whitelist]  # strip whitespace
        whitelist = [s for s in whitelist if s and not s.startswith("#")]  # strip empty and comments
    else:
        logger.info("creating default whitelist file")
        with open(wl, "w") as f:
            f.write(filter_default)
        whitelist = []

    if bl.exists() and bl.is_file():
        logger.debug("reading blacklist")
        with open(bl) as f:
            blacklist = list(f)
        blacklist = [s.strip() for s in blacklist]  # strip whitespace
        blacklist = [s for s in blacklist if s and not s.startswith("#")]  # strip empty and comments
    else:
        logger.info("creating default blacklist file")
        with open(bl, "w") as f:
            f.write(filter_default)
        blacklist = []

    return options, whitelist, blacklist


def match_filter(file: str, pattern: str, src: Path) -> bool:
    if pattern.startswith("^"):
        return bool(re.match(pattern, file))
    file = Path(file)
    pattern = Path(pattern)
    if not pattern.is_absolute():  # assume relative to src
        pattern = src / pattern
    if pattern.exists():
        if pattern.is_dir():
            return file.is_relative_to(pattern)
        elif pattern.is_file():
            return pattern.samefile(file)
    else:
        return False
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
# TODO Test file filtering
def filter_files(files: dict[str, stat_result], src: Path,
                 blacklist: Iterable[str], whitelist: Iterable[str]) -> dict[str, stat_result]:
    names = set(files.keys())
    filtered = {}
    for file in names:
        if not any(match_filter(file, pattern, src) for pattern in blacklist):
            filtered[file] = files[file]
        else:
            logger.debug(f"blacklisted: {file}")
    for file in names:
        if any(match_filter(file, pattern, src) for pattern in whitelist):
            filtered[file] = files[file]
            logger.debug(f"whitelisted: {file}")
    return filtered


# TODO testing robustness
def get_prior_backup(dest: Path, format: str) -> tuple[dict[str, stat_result], Path]:
    most_recent_dt = None
    most_recent_dir = None
    most_recent_stats = None
    dt = datetime(1970, 1, 1)
    for path in dest.iterdir():
        # only look at folders of the correct name format
        if not path.is_dir():
            continue
        # stats file must also exist
        stats_file = (path.parent / (path.name + ".stats"))
        if not stats_file.is_file():
            continue
        try:
            dt = datetime.strptime(path.name, format)
        except ValueError:
            pass
        else:
            if not most_recent_dt:
                most_recent_dt = dt
                most_recent_stats = stats_file
                most_recent_dir = path
            else:
                if dt > most_recent_dt:
                    most_recent_dt = dt
                    most_recent_stats = stats_file
                    most_recent_dir = path
    if most_recent_stats is not None:
        logger.debug(f"opening prior backup stats: {most_recent_stats}")
        with open(most_recent_stats, 'rb') as f:
            return pickle.load(f), most_recent_dir
    else:
        return {}, Path()
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
def compare_stat_result(a: stat_result, b: stat_result) -> bool:
    # ignore things like access time and metadata change time
    return all([
        a.st_ino == b.st_ino,
        a.st_dev == b.st_dev,
        a.st_mtime == b.st_mtime
        ])


# TODO testing accuracy and robustness (multiarch)
def compare_stats(new: dict[str, stat_result], old: dict[str, stat_result]) -> tuple[bool, list[str], list[str], list[str]]:
    is_modified = False  # is there any change at all from the old backup
    dirs = []     # create all (src) #dirs can't be linked so just copy all
    do_link = []  # (src, dst) #for unchanged and moved files
    do_copy = []  # (src) #dst is always same as src #for new and modified files
    # reverse mapping to find renamed (moved) files
    old_names_by_ino = {}
    for k, v in old.items():
        if v.st_ino in old_names_by_ino:
            old_names_by_ino[v.st_ino].append(k)
        else:
            old_names_by_ino[v.st_ino] = [k]
    # walk the new items
    for k, v in new.items():
        if stat.S_ISDIR(v.st_mode):
            dirs.append(k)
        elif v.st_ino in old_names_by_ino:
            # inode existed previously
            if compare_stat_result(old[old_names_by_ino[v.st_ino][0]], v):
                # stat unchanged (unmodified)
                if k in old_names_by_ino[v.st_ino]:
                    # name unchanged
                    do_link.append((k, k))  # (src, dst)
                else:
                    # name changed (moved)
                    do_link.append((old_names_by_ino[v.st_ino][0], k))  # (src, dst)
                    is_modified = True
            else:
                # file modified (stat changed)
                do_copy.append(k)
                is_modified = True
        else:
            # inode did not previously exist (new file)
            do_copy.append(k)
            is_modified = True
    return (is_modified, dirs, do_link, do_copy)


def do_backup(src: Path, dest: Path) -> None:
    logger.info("Starting backup")
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
    logger.debug("ensuring destination path exists")
    if not dest.is_dir():
        logger.critical("destination path given is not a valid directory")
        raise RuntimeError
    options, whitelist, blacklist = get_config(dest)
    follow_symlinks = options["symlinks"].lower() in ("true", "yes", "y")

    def handle_error(e: Exception) -> None:
        if options['errors'].lower() == "ignore":
            pass
        elif options['errors'].lower() == "warn":
            logger.exception(e, exc_info=True)
        elif options['errors'].lower() == "fail":
            logger.critical(e, exc_info=True)
            raise e

    logger.debug("walking source directory")
    # get target dir stats
    target_stats = {}
    # XXX better file stats scan than recursive glob?
    # query journal for file modifications?
    # options to throttle file operations to prevent system slowdown with disk usage?
    # os.walk is not faster.
    # os.scandir produces DirEntry without needed stats,
    # requiring extra stat() call anyway. Not faster.
    for i in src.rglob('*'):
        try:
            if follow_symlinks:
                target_stats[str(i)] = i.stat()
            else:
                target_stats[str(i)] = i.lstat()
        except Exception as e:
            handle_error(e)

    logger.debug("filtering target files")
    # filter stats
    new_stats = filter_files(target_stats, src, blacklist, whitelist)
    # don't try to backup recursively
    # TODO test this
    for file in new_stats.keys():
        if Path(file).is_relative_to(dest):
            raise Exception(f"Backed up files cannot contain backup destination\n\tsrc:{file}\n\tdst:{dest}")
    # convert absolute to relative path for processing
    new_stats = {str(Path(k).relative_to(src)): v for k, v in new_stats.items()}
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
    logger.debug("comparing source directory to old backups")
    # get old backup
    old_stats, old_backup = get_prior_backup(dest, options["format"])
    # compare old - new
    is_modified, dirs, do_link, do_copy = compare_stats(new_stats, old_stats)
    # optionally skip this backup
    if options["skip"].lower() in ("true", "yes", "y") and not is_modified:
        logger.info("Skipping backup: directory is unchanged")
        return  # did_backup=False

    # new folder
    this_backup = (dest / datetime.now().strftime(options['format']))
    this_backup.mkdir(parents=True, exist_ok=False)
    logger.info(f"Creating new backup: {this_backup}")

    logger.debug("creating dir structure")
    # build the structure
    for d in dirs:
        (this_backup / d).mkdir(parents=True, exist_ok=True)

    # copy files
    for i in sorted(do_copy):  # sorted() makes finding a specific file in debug output easier
        logger.debug(f"copying {i}")
        try:
            shutil.copy2(src / i, this_backup / i, follow_symlinks=follow_symlinks)
        except Exception as e:
            handle_error(e)
            del new_stats[i]  # delete from stats to indicate file is not present in this backup

    for s, d in sorted(do_link):
        logger.debug(f"linking {d}")
        try:
            os.link(old_backup / s, this_backup / d, follow_symlinks=follow_symlinks)
        except Exception as e:
            handle_error(e)
            del new_stats[d]  # delete from stats to indicate file is not present in this backup

    logger.debug("writing backup stats")
    with open(this_backup.parent / (this_backup.name + ".stats"), "wb") as f:
        pickle.dump(new_stats, f)
    logger.info("Backup complete")
    return  # did_backup=True
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
def main():
    parser = argparse.ArgumentParser(description=f"A single-file zero-dependency python backup utility. version: {__version__}")
    parser.add_argument('Destination', type=Path,
                        help="Destination for backup files including backup config files")
    parser.add_argument('Source', nargs="?", type=Path,
                        help="Path to directory which will be backed up. Omit this to generate default config files in the destination directory without performing a backup.")
    group = parser.add_mutually_exclusive_group()
    group.add_argument('-v', '--verbose', action="store_true", help="set console logging verbosity to DEBUG")
    group.add_argument('-q', '--quiet', action="store_true", help="set console logging verbosity to ERROR")
    args = parser.parse_args()

    if args.quiet:
        console_handler.setLevel(logging.ERROR)
    elif args.verbose:
        console_handler.setLevel(logging.DEBUG)

    logger.info("backup_utility.main")
    logger.debug(f"got args: {args}")
    if args.Source is None:
        logger.info("no backup source given: ensuring config files exist in destination directory.")
        get_config(args.Destination)
    else:
        do_backup(args.Source, args.Destination)


if __name__ == "__main__":
    main()
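The copy-or-hard-link strategy the question describes reduces to a small core. A stripped-down sketch of just that core (no filtering, no pickled stat cache; link_or_copy is a made-up helper name, and a plain mtime match stands in for the full stat comparison):

```python
import os
import shutil
from pathlib import Path


def link_or_copy(src: Path, old_snapshot: Path, new_snapshot: Path, rel: str) -> None:
    """Hard-link `rel` from the previous snapshot if it is unchanged,
    otherwise copy it from the live source tree."""
    old_file = old_snapshot / rel
    new_file = new_snapshot / rel
    new_file.parent.mkdir(parents=True, exist_ok=True)
    src_stat = (src / rel).stat()
    if old_file.exists() and old_file.stat().st_mtime == src_stat.st_mtime:
        # unchanged: the new snapshot shares the old file's data blocks
        os.link(old_file, new_file)
    else:
        # new or modified: a real copy, preserving timestamps (copy2)
        shutil.copy2(src / rel, new_file)
```

Unchanged files then cost one directory entry per snapshot instead of a full copy, which is the whole point of the incremental scheme.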
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
Answer: The overall code reads well; however, there are some issues.

Missing module docstring

Your module has a docstring, but it only conveys its author and license, not its purpose. It should contain something along the lines of your question's title.

Long functions

The function get_config() is currently undocumented and quite long. Also, the return type hint does not help anybody without scrolling to the end of that mega-function, where the purpose of its return values can be inferred from the variable names. I suggest you use a config object, such as a NamedTuple, to contain the relevant configuration and return that from get_config() -> Configuration. Also consider building the (politically correctly called) allow and deny lists in separate functions. The fact that you did not include those in the config object in the first place suggests that they are not related to it anyway.

Use return early

IMO it makes for easier reading of the code. E.g. consider converting this:

if pattern.exists():
    if pattern.is_dir():
        return file.is_relative_to(pattern)
    elif pattern.is_file():
        return pattern.samefile(file)
else:
    return False

into this:

if pattern.is_dir():
    return file.is_relative_to(pattern)
if pattern.is_file():
    return pattern.samefile(file)
return False

Also note that your current implementation of the above function may implicitly return None in the case that a file exists but is neither a directory nor a regular file (but is e.g. a block device). That gap would be apparent when using the return-early pattern. Also note that the check for pattern.exists() is redundant, since it is implicitly done by is_file() and is_dir() respectively (see the docs).
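The NamedTuple suggestion could look roughly like this; the Configuration name, the exact field set, and the parse_config helper are assumptions drawn from the question's option keys, not a prescribed design:

```python
import logging
from typing import NamedTuple


class Configuration(NamedTuple):
    """Typed container for what get_config() currently returns as a plain dict."""
    format: str = "%Y%m%d-%H%M%S"
    skip: bool = True
    symlinks: bool = True
    errors: str = "Warn"
    logfile: str = ""
    loglevel: int = logging.INFO


_LEVELS = {"DEBUG": logging.DEBUG, "INFO": logging.INFO,
           "WARNING": logging.WARNING, "ERROR": logging.ERROR,
           "CRITICAL": logging.CRITICAL}
_TRUTHY = ("true", "yes", "y")


def parse_config(raw: dict[str, str]) -> Configuration:
    """Build a Configuration from raw key=value pairs, keeping the
    NamedTuple defaults for anything missing or unrecognized."""
    known = {}
    if "format" in raw:
        known["format"] = raw["format"]
    for flag in ("skip", "symlinks"):
        if flag in raw:
            known[flag] = raw[flag].lower() in _TRUTHY
    if "errors" in raw:
        known["errors"] = raw["errors"]
    if "logfile" in raw:
        known["logfile"] = raw["logfile"]
    if "loglevel" in raw:
        known["loglevel"] = _LEVELS.get(raw["loglevel"], logging.INFO)
    return Configuration(**known)
```

Callers then get attribute access (config.skip) and a readable signature (get_config(dest) -> Configuration) instead of a three-element tuple of anonymous collections.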
{ "domain": "codereview.stackexchange", "id": 43307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file-system", "url": null }
rust, computational-geometry, graphics, opengl
Title: Generate vertices and normals for a flat shaded cylinder

Question: I would like to generate a list of vertices and normals (with the correct indices) for rendering a cylinder barrel (I omitted the end caps for brevity). The normals should not be interpolated (flat shaded cylinder), so all four vertices that form a radial segment must have the same normal.

Currently this is achieved by first generating a ring of vertices on the top and the bottom of the cylinder barrel and then another ring of vertices, shifted by one radial segment and with a shifted normal. This current function works fine, but I do not like that I have to calculate some values (u, theta, sin_theta, cos_theta) more than once and that I needed two loops for doing this.

fn print_cylinder_vertices(
    radius_bottom: f32,
    radius_top: f32,
    height: f32,
    radial_segments: u32
) {
    // The vertices and indices of the cylinder barrel.
    let mut verts = Vec::new();
    let mut inds = Vec::new();

    // Helper variables.
    let half_height = height / 2f32;

    // Calculate the slope so that the normals can be easily derived.
    let slope = (radius_bottom - radius_top) / height;

    for y in 0..=HEIGHT_SEGMENTS {
        let radius = y as f32 * (radius_bottom - radius_top) + radius_top;

        for x in 0..radial_segments {
            let u = x as f32 / radial_segments as f32;
            let u1 = (x as f32 + 0.5) / radial_segments as f32;

            let theta = u * THETA_END + THETA_START;
            let theta1 = u1 * THETA_END + THETA_START;

            let sin_theta = theta.sin();
            let cos_theta = theta.cos();
            let sin_theta1 = theta1.sin();
            let cos_theta1 = theta1.cos();
{ "domain": "codereview.stackexchange", "id": 43308, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rust, computational-geometry, graphics, opengl", "url": null }
            verts.push(Vertex {
                position: [
                    radius * sin_theta,
                    -(y as f32) * height + half_height,
                    radius * cos_theta,
                ],
                normal: [sin_theta1, slope, cos_theta1],
            });
        }

        for x in 1..(radial_segments + 1) {
            let u = x as f32 / radial_segments as f32;
            let u2 = (x as f32 - 0.5) / radial_segments as f32;

            let theta = u * THETA_END + THETA_START;
            let theta2 = u2 * THETA_END + THETA_START;

            let sin_theta = theta.sin();
            let cos_theta = theta.cos();
            let sin_theta2 = theta2.sin();
            let cos_theta2 = theta2.cos();

            verts.push(Vertex {
                position: [
                    radius * sin_theta,
                    -(y as f32) * height + half_height,
                    radius * cos_theta,
                ],
                normal: [sin_theta2, slope, cos_theta2],
            });
        }
    }

    for i in 0..radial_segments {
        let a = i;
        let b = i + radial_segments;
        let c = i + radial_segments * 3;
        let d = i + radial_segments * 2;

        // The first triangle of the radial segment.
        inds.push(b);
        inds.push(a);
        inds.push(d);

        // The second triangle of the radial segment.
        inds.push(c);
        inds.push(b);
        inds.push(d);
    }

    println!("{:.1?}", verts);
    println!("{:?}", inds);
}

for the sake of completeness here are the constants and the struct used within the function above:

const THETA_START: f32 = 0f32;
const THETA_END: f32 = 2f32 * std::f32::consts::PI;
const HEIGHT_SEGMENTS: u32 = 1;

#[derive(Debug)]
struct Vertex {
    #[allow(unused)]
    position: [f32; 3],
    #[allow(unused)]
    normal: [f32; 3],
}
{ "domain": "codereview.stackexchange", "id": 43308, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rust, computational-geometry, graphics, opengl", "url": null }
The desired output of the function for an input of print_cylinder_vertices(1f32, 1f32, 1f32, 3); would be:

[Vertex { position: [0.0, 0.5, 1.0], normal: [0.9, 0.0, 0.5] },
 Vertex { position: [0.9, 0.5, -0.5], normal: [-0.0, 0.0, -1.0] },
 Vertex { position: [-0.9, 0.5, -0.5], normal: [-0.9, 0.0, 0.5] },
 Vertex { position: [0.9, 0.5, -0.5], normal: [0.9, 0.0, 0.5] },
 Vertex { position: [-0.9, 0.5, -0.5], normal: [-0.0, 0.0, -1.0] },
 Vertex { position: [0.0, 0.5, 1.0], normal: [-0.9, 0.0, 0.5] },
 Vertex { position: [0.0, -0.5, 1.0], normal: [0.9, 0.0, 0.5] },
 Vertex { position: [0.9, -0.5, -0.5], normal: [-0.0, 0.0, -1.0] },
 Vertex { position: [-0.9, -0.5, -0.5], normal: [-0.9, 0.0, 0.5] },
 Vertex { position: [0.9, -0.5, -0.5], normal: [0.9, 0.0, 0.5] },
 Vertex { position: [-0.9, -0.5, -0.5], normal: [-0.0, 0.0, -1.0] },
 Vertex { position: [0.0, -0.5, 1.0], normal: [-0.9, 0.0, 0.5] }]

[3, 0, 6, 9, 3, 6, 4, 1, 7, 10, 4, 7, 5, 2, 8, 11, 5, 8]

I also made a Playground.

Answer: Some suggestions:

Separate generating the mesh from printing the mesh. i.e. return a Mesh class from the generator function and write a separate print_mesh function.

I don't think a shape with two radii would normally be described as a cylinder (I think "frustum" is the correct term). Consider providing various associated new_ functions for the Mesh class to generate different shapes e.g. new_cylinder, new_frustum, new_cone. These would take the appropriate parameters for the various names, and forward to a single function behind the scenes (maybe called new_conic_frustum or something).

Add some documentation. A user is likely to want more info when calling a function like this: How is the shape oriented? Is it centered at the origin, or does it use the origin as a base-line, etc.
{ "domain": "codereview.stackexchange", "id": 43308, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rust, computational-geometry, graphics, opengl", "url": null }
Consider splitting up the Vertex class, and having separate vectors for positions, normals, etc. This is more efficient if we want to skip generating the normals (or UVs, or tangents, or whatever we add in future).

const THETA_END: f32 = 2f32 * std::f32::consts::PI;

this already exists, and is called tau: std::f32::consts::TAU.

The constants THETA_START, THETA_END and HEIGHT_SEGMENTS would probably be better as function arguments.

let sin_theta = theta.sin();

We could just write this inline; there's no point in making it a variable.

It looks like the generation of the indices doesn't handle values of HEIGHT_SEGMENTS other than 1. So providing HEIGHT_SEGMENTS (even as a constant) is misleading.

(Note: I actually disagree with the other answer, and think that plain loops are much clearer in this case. I also think it mis-characterizes "functional" programming.)
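On the asker's original complaint (computing u, theta, sin and cos more than once): the per-segment angles do not depend on the ring, so they can be tabulated once and reused by every row of vertices. A sketch of the idea in Python rather than Rust, with illustrative names:

```python
import math


def ring_angles(radial_segments: int, theta_start: float = 0.0,
                theta_end: float = math.tau) -> list[tuple[float, float]]:
    """(sin, cos) pairs for each segment edge, computed exactly once."""
    return [
        (math.sin(theta_start + theta_end * x / radial_segments),
         math.cos(theta_start + theta_end * x / radial_segments))
        for x in range(radial_segments)
    ]


def face_angles(radial_segments: int, theta_start: float = 0.0,
                theta_end: float = math.tau) -> list[tuple[float, float]]:
    """(sin, cos) pairs at each face center, offset by half a segment;
    these drive the flat-shaded normals."""
    return [
        (math.sin(theta_start + theta_end * (x + 0.5) / radial_segments),
         math.cos(theta_start + theta_end * (x + 0.5) / radial_segments))
        for x in range(radial_segments)
    ]
```

Both the top and bottom rings (and the half-segment-shifted copies) would then index into these two tables instead of calling sin/cos per vertex, which also removes the duplicated second loop body.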
{ "domain": "codereview.stackexchange", "id": 43308, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rust, computational-geometry, graphics, opengl", "url": null }
error-handling, typescript, angular-2+, rxjs
Title: Typescript error handler

Question: I wrote a function in Typescript which takes required parameters, optional parameters and parameters with default values. I want to use this function for error handling of http requests. I am not sure if my solution is good, so I wanted to know the opinion of the Code Review community.

Here is my function:

public handleError<T>(
    operation: string = 'operation',
    customErrorMessage: string,
    valueToReturn: T,
    logError = true,
    showSnackbar?: boolean,
    showHttpErrorResponse?: boolean
): (error: any) => Observable<any> {
    return (error: any): Observable<T | Error> => {
        if (logError) {
            console.error(error.message);
        }
        if (showSnackbar) {
            if (showHttpErrorResponse) {
                this.snackbar.open(error.message, 'OK');
            } else {
                this.snackbar.open(customErrorMessage, 'OK');
            }
        }
        return showHttpErrorResponse
            ? of(new Error(error.message))
            : of(valueToReturn as T);
    };
}

Here is an example of calling:

this.httpClient.get('/', { observe: 'response', responseType: 'text' }).pipe(
    catchError(
        this.errorService.handleError('getSysName', '', null, true, false, true)
    )
);

My Questions:

How is the selection of the parameter types (usage of required, optional and default parameters)? If it is not good, how would you improve it?
How is the naming of the parameters?
How is the order of the parameters? If it is not good, how would you order the parameters?
Do you also have improvement suggestions for the body of the function?
Do you have completely different suggestions for improvement?

Answer: My understanding of your approach here is to handle errors which may occur when sending a request to one (of your) servers. This function should log and/or display a snackbar depending on the parameters that were passed before. This leads us to your first three questions:

Answers to questions 1-3
{ "domain": "codereview.stackexchange", "id": 43309, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "error-handling, typescript, angular-2+, rxjs", "url": null }
operation: I noticed that the operation parameter is unused and can be removed; what is it even used for? Why is the default value 'operation'?

customErrorMessage: So the custom message could basically be an empty string (""), which would leave the snackbar that pops up empty except for, I guess, an 'OK' button?! This seems like bad UX or a bug. You may want to use something like customErrorMessage: string | undefined and explicitly check that the string is not empty. Also see [1].

valueToReturn: We are handling errors that may occur; why should this method return a value? Which value, anyway? Again, is null a good approach here as well (maybe see [1])? Also see [2].

logError: I don't like where this is placed: why is the first parameter a default parameter, and why is this default parameter placed in between (see [3])? What do you think about an optional parameter called disableLogging?: boolean? When passed true, this parameter disables the logging; otherwise everything gets logged.

showSnackbar, showHttpErrorResponse: Those are very self-explanatory; I like them. It is also good that, based on their signatures, those are the last parameters!

Answer to question 4

I'll concentrate on the function body for now, leaving out the rest.

The check whether or not to log is a good approach; this seems fine.

if (logError) {
    console.error(error.message);
}

This looks like a bit of arrow code https://blog.codinghorror.com/flattening-arrow-code/.

if (showSnackbar) {
    if (showHttpErrorResponse) {
        this.snackbar.open(error.message, 'OK');
    } else {
        this.snackbar.open(customErrorMessage, 'OK');
    }
}

Let's consider the following, which focuses on readability, following the previously linked article:

if (showSnackbar) {
    const content = showHttpErrorResponse ? error.message : customErrorMessage;
    this.snackbar.open(content, 'OK');
}
{ "domain": "codereview.stackexchange", "id": 43309, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "error-handling, typescript, angular-2+, rxjs", "url": null }
The same as before; we could try to factor out the common parts:

return showHttpErrorResponse
    ? of(new Error(error.message))
    : of(valueToReturn as T);

Which could look like this:

const returnValue = showHttpErrorResponse
    ? new Error(error.message)
    : valueToReturn;
return of(returnValue);

A follow-up question here is: why does showHttpErrorResponse dictate whether we return a custom value or an Error here? This is not clear from a function signature point of view.

Question 5:

I would like to use this question for two things. First, why is the return value of handleError at first (error: any) => Observable<any>, while later it is (error: any) => Observable<T | Error>? I'd suggest aligning those two.

The last thing to finalize this post: this is what I'd come up with considering the things I've noted. On top of that, I've decided to change the signature to be an object (destructured in the parameter list), which makes the boolean parameters and null/undefined params more readable.

handleError<T>({
    customErrorMessage,
    customReturnValue,
    showSnackbar,
    showHttpErrorResponse,
    disableLogging
}: {
    customErrorMessage: string | undefined,
    customReturnValue: T | undefined,
    showSnackbar?: boolean,
    showHttpErrorResponse?: boolean,
    disableLogging?: boolean
}): (error: any) => Observable<T | Error> {
    return (error: any): Observable<T | Error> => {
        if (!disableLogging) {
            console.error(error.message);
        }
        if (showSnackbar) {
            const content = showHttpErrorResponse ? error.message : customErrorMessage;
            this.snackbar.open(content, 'OK');
        }
        const returnValue = showHttpErrorResponse
            ? new Error(error.message)
            : customReturnValue;
        return of(returnValue);
    };
}

With the following usage:

this.errorService.handleError({
    customErrorMessage: undefined,
    customReturnValue: null,
    showSnackbar: true,
    showHttpErrorResponse: false,
    disableLogging: false
})
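The shape of this API, a configuration call that returns the actual error handler, is a plain closure factory, and the options-object idea maps directly onto keyword-only arguments in other languages. A Python sketch of the same pattern (all names illustrative, not the poster's Angular code):

```python
from typing import Any, Callable, Optional


def make_error_handler(
    *,
    custom_error_message: Optional[str] = None,
    custom_return_value: Any = None,
    show_http_error_response: bool = False,
    disable_logging: bool = False,
    log: Callable[[str], None] = print,
) -> Callable[[Exception], Any]:
    """Return a handler closure configured once, up front."""
    def handle(error: Exception) -> Any:
        if not disable_logging:
            log(str(error))
        # mirror the TypeScript ternary: either surface the error
        # or swallow it and return the configured fallback value
        return error if show_http_error_response else custom_return_value
    return handle
```

Because every option is keyword-only, a call site reads like the options-object version (make_error_handler(custom_return_value=None, show_snackbar-style flags by name) rather than a row of positional booleans.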
{ "domain": "codereview.stackexchange", "id": 43309, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "error-handling, typescript, angular-2+, rxjs", "url": null }
error-handling, typescript, angular-2+, rxjs ------ Appendix ------ [1]: Checking that a string is not empty is also possible with typings, to some extent. Something I scribbled down quickly: type NotEmptyString<T extends string> = `${T}` extends "" ? never : T; type X = NotEmptyString<""> // X resolves to never type Y = NotEmptyString<" "> // Y resolves to " " [2]: Returning null can be a problematic approach and could potentially cause runtime exceptions. I myself try to reduce the usage of null and rather explicitly type my functions to return either a value OR undefined. [3]: In general (across multiple languages, independent of syntax), default parameters come last. This avoids signatures like: foo(undefined,undefined,undefined,requiredParamVal,undefined,undefined) Consider this example: const foo = (a: number, bar = 1) => {} foo(10) const foofoo = (a = 1, bar: string) => {} foofoo(undefined, "")
{ "domain": "codereview.stackexchange", "id": 43309, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "error-handling, typescript, angular-2+, rxjs", "url": null }
javascript, programming-challenge, sorting, time-limit-exceeded Title: New Year Chaos JavaScript, needs to be sped up Question: Similar to this question but this is for Python Original problem with description on hacker rank I am currently trying to iterate through a large number of arrays and count how many times numbers have been swapped. A sorted array looks like this: [1,2,3,4,5] and an element can be swapped only towards the front (towards index 0), at most twice. If an element is more than 2 positions ahead of where it started, the array is deemed 'Too chaotic' and the process should stop. Instead of bubble sorting I am simply going through and counting the actual swaps. As a sorted array is not actually required, my code works except for a couple of the tests, where it times out due to large arrays. Any ideas on how to speed this process up? function minimumBribes(q) { console.log(sort(q)); function sort(items) { let bribes = 0; for (let i = 0; i < items.length; i++) { if (items[i] - (i + 1) > 2) return "Too chaotic"; for (let j = 0; j < i; j++) { if (items[j] > items[i]) bribes++; } } return bribes; } }
{ "domain": "codereview.stackexchange", "id": 43310, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, programming-challenge, sorting, time-limit-exceeded", "url": null }
javascript, programming-challenge, sorting, time-limit-exceeded Answer: Reduce the inner loop's iteration count. The problem is that the inner loop iterates over too many items, so you spend too much time processing data you know is irrelevant. As the function should exit if it detects a position has made over 2 bribes, the inner loop only needs to check positions starting 2 below the item's original position, not from the start of the line. Quicker solution It only requires a slight modification of your code, but as you have complicated the situation by calling an inner function sort, the example simply removes the inner function. The line for (j = Math.max(0, pos - 2); j < i; j++) { is where the improvement is, with pos being items[i] in your function. function minBribe(queue) { var bribes = 0, i, j; for (i = 0; i < queue.length; i++) { const pos = queue[i], at = i + 1; if (pos - at > 2) { return "Too chaotic" } for (j = Math.max(0, pos - 2); j < i; j++) { if (queue[j] > pos) { bribes++ } } } return bribes; } This brings the solution down from \$O(n^2)\$ to near \$O(n)\$; the number of bribes is a factor, so it's closer to \$O(n + m)\$ where \$m\$ is the number of bribes (this is only an approximation).
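The sped-up function can be sanity-checked against the well-known sample cases from the HackerRank statement ([2,1,5,3,4] costs 3 bribes; [2,5,1,3,4] is too chaotic):

```javascript
// Optimized bribe counter: the inner loop only scans from two positions
// in front of where the person started (pos - 2) up to their current spot.
function minBribe(queue) {
  let bribes = 0;
  for (let i = 0; i < queue.length; i++) {
    const pos = queue[i];
    if (pos - (i + 1) > 2) return "Too chaotic"; // moved forward more than 2
    for (let j = Math.max(0, pos - 2); j < i; j++) {
      if (queue[j] > pos) bribes++; // a higher number ahead means one bribe
    }
  }
  return bribes;
}

console.log(minBribe([2, 1, 5, 3, 4])); // → 3
console.log(minBribe([2, 5, 1, 3, 4])); // → "Too chaotic"
```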
{ "domain": "codereview.stackexchange", "id": 43310, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, programming-challenge, sorting, time-limit-exceeded", "url": null }
javascript, algorithm, integer, complexity Title: Long arithmetic addition in JS Question: An implementation of the algorithm which adds two numbers given in string form using long-addition rules. var assert = require('assert'); var strAdd = function(lnum, rnum) { lnum = lnum.split('').reverse(); rnum = rnum.split('').reverse(); var len = Math.max(lnum.length, rnum.length), acc = 0, res = []; for(var i = 0; i < len; i++) { var subres = Number(lnum[i] || 0) + Number(rnum[i] || 0) + acc; acc = ~~(subres / 10); // integer division res.push(subres % 10); } if (acc !== 0) { res.push(acc); } return res.reverse().join(''); }; assert(strAdd('1', '9') === '10'); assert(strAdd('1', '0') === '1'); assert(strAdd('5', '5') === '10'); assert(strAdd('2', '2') === '4'); assert(strAdd('20', '202') === '222'); Is there a better way (in terms of complexity) to achieve the same result? Any style advice is appreciated.
{ "domain": "codereview.stackexchange", "id": 43311, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, integer, complexity", "url": null }
javascript, algorithm, integer, complexity Answer: Performance improvements related to the approach Integer base For better performance we can split the number not into single digits but into groups of digits for a given base. Look at Number.MAX_SAFE_INTEGER and some math \$ max\_number = 2 ^ {53} - 1 = 9007199254740991 \$ \$ digits = \lfloor \log_{10} max\_number \rfloor \$ \$ base = 10 ^ { digits } \$ Note: digits will be lower by one if the logarithm gives an integer. What do I want to say? \$ max\_number \$ can hold \$ digits \$ digits and still handle the overflow during additions. What is the meaning of \$ base \$? Currently your algorithm uses \$ base = 10 ^ 1 = 10 \$. Here res.push(subres % 10); you are performing a modulo operation with a base of 10, which means you are storing an array of single-digit numbers. But a Number allows you to store up to 15 digits. Execute Math.log10(Number.MAX_SAFE_INTEGER) in your browser to find this value (do not forget to take the floor of the resulting decimal). Look at the C++ example at e-maxx.ru for an idea of how to implement this. I have not found a good enough English article, but you can use Google Translate. Benchmark base \$ 10^1 \$ vs base \$ 10^{14} \$ function plain(lnum, rnum) { lnum = lnum.split('').reverse(); rnum = rnum.split('').reverse(); var len = Math.max(lnum.length, rnum.length) , acc = 0 , res = []; for (var i = 0; i < len; i++) { var subres = Number(lnum[i] || 0) + Number(rnum[i] || 0) + acc; acc = ~~(subres / 10); res.push(subres % 10); } if (acc !== 0) { res.push(acc); } return res.reverse().join(''); } DIGITS = Math.floor(Math.log10(Number.MAX_SAFE_INTEGER)) - 1 BIG_INTEGER_BASE = Math.pow(10, DIGITS); FILL_STRING = (BIG_INTEGER_BASE + '').substr(1) function readBigInteger(str, base) { var res = []; for (var i = str.length; i > 0; i -= base) if (i < base) res.push(Number(str.substr(0, i))); else res.push(Number(str.substr(i - base, base))); return res; }
{ "domain": "codereview.stackexchange", "id": 43311, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, integer, complexity", "url": null }
javascript, algorithm, integer, complexity function printBigInteger(integer, base) { for (var i = 0; i + 1 < integer.length; ++i) { var s = FILL_STRING + integer[i] integer[i] = s.substr(s.length - base) } return integer.reverse().join(''); } function plainWithDifferentBase (a, b) { lnum = readBigInteger(a, DIGITS) rnum = readBigInteger(b, DIGITS) var len = Math.max(lnum.length, rnum.length) , acc = 0 , res = []; for (var i = 0; i < len; i++) { var subres = (lnum[i] || 0) + (rnum[i] || 0) + acc; acc = ~~(subres / BIG_INTEGER_BASE); res.push(subres % BIG_INTEGER_BASE); } if (acc !== 0) { res.push(acc); } return printBigInteger(res, DIGITS); } var fib = function(num, add) { var prev = '1', curr = '1', temp; while (curr.length < num) { temp = curr; curr = add(prev, curr); prev = temp; } return curr; }; SIZE = 10000 console.time("plainWithDifferentBase"); fib(SIZE, plainWithDifferentBase); console.timeEnd("plainWithDifferentBase"); console.time("plain"); fib(SIZE, plain); console.timeEnd("plain"); Results Using \$ base = 10^{14} \$ is roughly 7.7 times faster. plainWithDifferentBase: 23848ms plain: 182856ms Performance improvements related to the interpreter Look at these: formatting a list of variables; converting a character to an integer using charCodeAt; preallocating an array of the required size; using the standard language library API (builtins)
{ "domain": "codereview.stackexchange", "id": 43311, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, integer, complexity", "url": null }
javascript, algorithm, integer, complexity function characterToInt(char) { // 48 is the char code of zero. return char.charCodeAt(0) - 48; } function longAdd(lnum, rnum) { // Here we don't use a lambda (anonymous) function, to prevent creating // additional objects and to give less work to the GC. lnum = lnum.split('').reverse().map(characterToInt); rnum = rnum.split('').reverse().map(characterToInt); // With the comma as the leading character you don't break the formatting // approach of using 2 spaces. var len = Math.max(lnum.length, rnum.length) , acc = 0 // Allocate the required space for the array to prevent reallocation overhead , res = new Array(len); for (var i = 0; i < len; ++i) { var subres = (lnum[i] || 0) + (rnum[i] || 0) + acc; // Use Math.floor instead of 2 extra operations, as it is a library // function that the engine can optimize. acc = Math.floor(subres / 10); res[i] = subres % 10; } if (acc !== 0) { res.push(acc); } return res.reverse().join(''); } Possible ways to allocate an array of the required size: var a = new Array(size) // 1 var a = []; a.length = size // 2 Note: this is not the final or most optimized version ever; I have tried to show you a set of approaches you might want to know to optimize your code even further. Benchmark using a browser Just open the console in your browser and paste the following code var strAdd = function(lnum, rnum) { lnum = lnum.split('').reverse(); rnum = rnum.split('').reverse(); var len = Math.max(lnum.length, rnum.length), acc = 0, res = []; for(var i = 0; i < len; i++) { var subres = Number(lnum[i] || 0) + Number(rnum[i] || 0) + acc; acc = ~~(subres / 10); // integer division res.push(subres % 10); } if (acc !== 0) { res.push(acc); } return res.reverse().join(''); }; function characterToInt(char) { return char.charCodeAt(0) - 48; }
{ "domain": "codereview.stackexchange", "id": 43311, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, integer, complexity", "url": null }
javascript, algorithm, integer, complexity function longAdd(lnum, rnum) { lnum = lnum.split('').reverse().map(characterToInt); rnum = rnum.split('').reverse().map(characterToInt); var len = Math.max(lnum.length, rnum.length) , acc = 0 , res = new Array(len); for (var i = 0; i < len; ++i) { var subres = (lnum[i] || 0) + (rnum[i] || 0) + acc; acc = Math.floor(subres / 10); res[i] = subres % 10; } if (acc !== 0) { res.push(acc); } return res.reverse().join(''); } var fib = function(num, add) { var prev = '1', curr = '1', temp; while (curr.toString().length !== num) { temp = curr; curr = add(prev, curr); prev = temp; } return curr; }; console.time("preallocated"); fib(1000, longAdd); console.timeEnd("preallocated"); console.time("plain"); fib(1000, strAdd); console.timeEnd("plain"); My results are: Google Chrome 46.0.2490.80 preallocated: 1620.238ms plain: 4311.755ms Mozilla Firefox 41.0.2 preallocated: 849.72ms plain: 3747.05ms Node.js and preallocation Let's start with this test, where we conditionally switch allocation from dynamic to preallocated. var plain = function(lnum, rnum) { lnum = lnum.split('').reverse(); rnum = rnum.split('').reverse(); var len = Math.max(lnum.length, rnum.length), acc = 0, res = []; for(var i = 0; i < len; i++) { var subres = Number(lnum[i] || 0) + Number(rnum[i] || 0) + acc; acc = ~~(subres / 10); // integer division res.push(subres % 10); } if (acc !== 0) { res.push(acc); } return res.reverse().join(''); };
{ "domain": "codereview.stackexchange", "id": 43311, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, integer, complexity", "url": null }
javascript, algorithm, integer, complexity var preallocated = function(lnum, rnum) { lnum = lnum.split('').reverse(); rnum = rnum.split('').reverse(); var len = Math.max(lnum.length, rnum.length), acc = 0, res = []; if (len > 1000) res.length = len; for(var i = 0; i < len; i++) { var subres = Number(lnum[i] || 0) + Number(rnum[i] || 0) + acc; acc = ~~(subres / 10); // integer division if (len > 1000) res[i] = subres % 10 else res.push(subres % 10); } if (acc !== 0) { if (len > 1000) res = res.concat(acc) else res.push(acc); } return res.reverse().join(''); }; var fib = function(num, add) { var prev = '1', curr = '1', temp; while (curr.toString().length !== num) { temp = curr; curr = add(prev, curr); prev = temp; } return curr; }; console.time("preallocated"); fib(2000, preallocated); console.timeEnd("preallocated"); console.time("plain"); fib(2000, plain); console.timeEnd("plain"); My results: $ node /tmp/help.js preallocated: 1278ms plain: 4249ms $ node --version v0.12.7 Warning: the numbers look very different if you run the Node benchmark in your browser. Google Chrome 46.0.2490.80 gives preallocated: 61737.298ms plain: 16982.603ms Mozilla Firefox 41.0.2 preallocated: 12759.65ms plain: 14391.38ms Summary We get different results depending on the platform. People love JS because of 2 things: the event loop, and the same codebase for backend and frontend. But, as you saw above, you have to optimize the frontend and the backend in different ways. How to solve that is up to you.
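To make the base-splitting idea concrete outside the benchmark harness, here is a minimal self-contained sketch using an illustrative chunk size of 7 digits (base 10^7); the chunk size is my choice for the example, not a value from the post, picked so a chunk sum plus carry comfortably stays within Number precision:

```javascript
// Long addition over 7-digit chunks instead of single digits.
// CHUNK and BASE are illustrative choices for this sketch.
const CHUNK = 7;
const BASE = 10 ** CHUNK; // 10,000,000

// Split a decimal string into little-endian chunks of CHUNK digits.
function toChunks(str) {
  const out = [];
  for (let i = str.length; i > 0; i -= CHUNK) {
    out.push(Number(str.slice(Math.max(0, i - CHUNK), i)));
  }
  return out;
}

function addChunked(a, b) {
  const x = toChunks(a), y = toChunks(b);
  const len = Math.max(x.length, y.length);
  const res = new Array(len);
  let carry = 0;
  for (let i = 0; i < len; i++) {
    const sum = (x[i] || 0) + (y[i] || 0) + carry;
    carry = Math.floor(sum / BASE);
    res[i] = sum % BASE;
  }
  if (carry !== 0) res.push(carry);
  // Every chunk except the most significant one must be zero-padded.
  return res
    .map((n, i) => (i === res.length - 1 ? String(n) : String(n).padStart(CHUNK, "0")))
    .reverse()
    .join("");
}

console.log(addChunked("99999999999999", "1")); // → "100000000000000"
```

The zero-padding step is the part that is easy to get wrong when moving from base 10 to a larger base; it plays the same role as FILL_STRING in the answer above.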
{ "domain": "codereview.stackexchange", "id": 43311, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, integer, complexity", "url": null }
c#, .net, websocket Title: General web socket client with auto reconnect capabilities Question: This is a web socket client wrapper with auto reconnect capabilities. First of all, I know I should avoid working with strings to reduce allocation; that's there for test purposes. So, ignore the fact that it is an unbounded Channel<string>, that it is not directly parsing to JSON from ReadOnlyMemory<byte> (it stringifies instead) and that it doesn't have a max message size. I want a code review on everything else, i.e. the way it starts/stops/reconnects the web socket, the way I use both classes together, etc. There are two general clients in the code below: GeneralClient2. The initial idea was to signal exit by completing the writer, as is preferred by Microsoft and Marc Gravell. In other words, calling .Complete/.TryComplete on the writer completes the reading loop (ProcessSendAsync/ProcessDataAsync). Edit: I realized completing the writer is not a good idea in this case because: It requires the Channel<T> to be nullable, because a completed writer cannot be reused. This led to nullability checks in SendAsync, which completely killed the point of the channels as I was using them as opposed to ConcurrentQueue + AsyncResetEvent (similar to what's been done here). In other words, I want to be able to enqueue messages anytime I want and not get a ChannelClosedException because of some bad timing. Channels have the best performance (producer/consumer pattern) compared to DataFlow, BlockingCollection and basically anything else. If SendAsync is called from multiple threads at the same time, it will throw an exception. It can be avoided using AsyncResetEvent/SemaphoreSlim, etc. Channels allow us to specify SingleReader = true, which acts as a lock. We can make it bounded and restrict it more, so the client can't DOS us.
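The "SingleReader = true acts as a lock" point is language-independent: if exactly one consumer loop drains the queue, producers never touch the socket concurrently. A minimal JavaScript sketch of that shape (this is not the .NET Channel API, just an illustration of the pattern):

```javascript
// Minimal async single-consumer queue: any number of producers may enqueue
// at any time, but only the one drain loop ever touches the underlying
// resource, which serializes sends like a Channel with SingleReader = true.
class SingleReaderQueue {
  constructor() {
    this.items = [];
    this.resolve = null; // wake-up hook for the (single) waiting reader
  }
  enqueue(item) {
    this.items.push(item);
    if (this.resolve) { this.resolve(); this.resolve = null; }
  }
  async next() {
    while (this.items.length === 0) {
      await new Promise((r) => { this.resolve = r; });
    }
    return this.items.shift();
  }
}

async function demo() {
  const queue = new SingleReaderQueue();
  const sent = [];
  // Producers enqueue whenever they like...
  queue.enqueue("a");
  queue.enqueue("b");
  queue.enqueue("c");
  // ...while one drain loop "sends" them strictly in order.
  for (let i = 0; i < 3; i++) sent.push(await queue.next());
  return sent;
}

demo().then((sent) => console.log(sent)); // → ["a", "b", "c"]
```

Note the single `resolve` slot only works because there is exactly one reader; supporting multiple readers would need a wait list, which is precisely the complexity SingleReader lets the Channel skip.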
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket GeneralClient uses a CancellationToken to abort the websocket - similar to this example, which is well documented, even though that example first tries to gracefully close the socket and, if something went sideways, then aborts it. You can express your opinion on this one, but I would rather have a review on GeneralClient2 as it's the preferred way. using System.Buffers; using System.Diagnostics; using System.Net.WebSockets; using System.Text; using System.Text.Json; using System.Text.Json.Serialization; using System.Threading.Channels; using Nito.AsyncEx; namespace CodeReview; public sealed class GeneralClient2 : IDisposable { private readonly string _url; private readonly SemaphoreSlim _semaphore = new(1, 1); private readonly Channel<string> _incomingMessages = Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = false, SingleWriter = true }); private readonly Channel<string> _outgoingMessages = Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = true, SingleWriter = false }); private ClientWebSocket? _clientWebSocket; private CancellationTokenSource? _tokenSource; private Task _processingSend = Task.CompletedTask; private Task _processingData = Task.CompletedTask; private Task _processingReceive = Task.CompletedTask; public GeneralClient2(string url) { if (string.IsNullOrWhiteSpace(url)) { throw new ArgumentNullException(nameof(url)); } _url = url; } public bool IsRunning { get; private set; } public event EventHandler? Connected; public event EventHandler? Disconnected; public event EventHandler<MessageReceivedEventArgs>? MessageReceived; public void Dispose() { _semaphore.Dispose(); _incomingMessages.Writer.TryComplete(); _outgoingMessages.Writer.TryComplete(); }
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket public async Task StartAsync() { // Prevent a race condition await _semaphore.WaitAsync().ConfigureAwait(false); try { if (IsRunning) { return; } while (!await ConnectAsync().ConfigureAwait(false)) { } IsRunning = true; Debug.Assert(_clientWebSocket != null); _tokenSource = new CancellationTokenSource(); _processingSend = ProcessSendAsync(_clientWebSocket, _tokenSource.Token); _processingData = ProcessDataAsync(_tokenSource.Token); _processingReceive = ProcessReceiveAsync(_clientWebSocket); } finally { _semaphore.Release(); } } public async Task StopAsync() { Console.WriteLine("Stopping"); if (!IsRunning) { return; } try { if (_clientWebSocket is { State: not (WebSocketState.Aborted or WebSocketState.Closed or WebSocketState.CloseSent) }) { await _clientWebSocket.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, string.Empty, CancellationToken.None).ConfigureAwait(false); } } catch { // Any exception thrown here will be caused by the socket already being closed, // which is the state we want to put it in by calling this method, which // means we don't care if it was already closed and threw an exception // when we tried to close it again. } await _processingReceive.ConfigureAwait(false); Console.WriteLine("Stopped"); } public ValueTask SendAsync(string message) { return _outgoingMessages.Writer.WriteAsync(message); } private async ValueTask<bool> ConnectAsync() { Console.WriteLine("Connecting"); var ws = new ClientWebSocket();
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket try { await ws.ConnectAsync(new Uri(_url), CancellationToken.None).ConfigureAwait(false); Connected?.Invoke(this, EventArgs.Empty); } catch (Exception) // TaskCanceledException & WebSocketException { ws.Dispose(); return false; } _clientWebSocket = ws; Console.WriteLine("Connected"); return true; } private async Task ProcessSendAsync(WebSocket webSocket, CancellationToken cancellationToken) { try { while (await _outgoingMessages.Reader.WaitToReadAsync(cancellationToken).ConfigureAwait(false)) { while (_outgoingMessages.Reader.TryRead(out var message)) { // "SingleReader = true" acts as a lock. // The lock is required because the client will throw an exception if SendAsync is // called from multiple threads at the same time. But this issue only happens with several // framework versions. var data = new ArraySegment<byte>(Encoding.UTF8.GetBytes(message)); await webSocket.SendAsync(data, WebSocketMessageType.Text, true, CancellationToken.None).ConfigureAwait(false); } } } catch (OperationCanceledException) { // normal upon task/token cancellation, disregard } Console.WriteLine("Send loop end"); }
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket private async Task ProcessDataAsync(CancellationToken cancellationToken) { try { while (await _incomingMessages.Reader.WaitToReadAsync(cancellationToken).ConfigureAwait(false)) { while (_incomingMessages.Reader.TryRead(out var message)) { await ProcessMessageAsync(message).ConfigureAwait(false); } } } catch (OperationCanceledException) { // normal upon task/token cancellation, disregard } Console.WriteLine("Data loop end"); } private Task ProcessMessageAsync(string message) { MessageReceived?.Invoke(this, new MessageReceivedEventArgs(message)); return Task.CompletedTask; } private async Task ProcessReceiveAsync(WebSocket webSocket) { Debug.Assert(_incomingMessages != null && _outgoingMessages != null); try { while (true) { ValueWebSocketReceiveResult receiveResult; using var buffer = MemoryPool<byte>.Shared.Rent(4096); await using var ms = new MemoryStream(buffer.Memory.Length); do { receiveResult = await webSocket.ReceiveAsync(buffer.Memory, CancellationToken.None).ConfigureAwait(false); if (receiveResult.MessageType == WebSocketMessageType.Close) { break; } await ms.WriteAsync(buffer.Memory[..receiveResult.Count], CancellationToken.None).ConfigureAwait(false); } while (!receiveResult.EndOfMessage); ms.Seek(0, SeekOrigin.Begin);
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket if (receiveResult.MessageType == WebSocketMessageType.Text) { using var reader = new StreamReader(ms, Encoding.UTF8); var message = await reader.ReadToEndAsync().ConfigureAwait(false); await _incomingMessages.Writer.WriteAsync(message, CancellationToken.None).ConfigureAwait(false); } else if (receiveResult.MessageType == WebSocketMessageType.Close) { await CloseAsync().ConfigureAwait(false); return; } } } catch (WebSocketException ex) when (ex.WebSocketErrorCode == WebSocketError.ConnectionClosedPrematurely) { Console.WriteLine("WebSocketException prematurely"); await CloseAsync().ConfigureAwait(false); await StartAsync().ConfigureAwait(false); } } private async Task CloseAsync() { // Cancel loops _tokenSource?.Cancel(); _tokenSource?.Dispose(); // Wait for the tasks to finish await Task.WhenAll(_processingSend, _processingData).ConfigureAwait(false); // Prevent a leak _clientWebSocket?.Dispose(); Disconnected?.Invoke(this, EventArgs.Empty); IsRunning = false; } } public sealed class GeneralClient : IDisposable { private readonly string _url; private readonly Channel<string> _incomingMessages; private readonly Channel<string> _outgoingMessages; private readonly AsyncManualResetEvent _connectionResetEvent = new(false); private ClientWebSocket? _clientWebSocket; private CancellationTokenSource? _tokenSource; private Task _processingSend = Task.CompletedTask; private Task _processingData = Task.CompletedTask; private Task _processingLoop = Task.CompletedTask;
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket public GeneralClient(string url) { if (string.IsNullOrWhiteSpace(url)) { throw new ArgumentNullException(nameof(url)); } _url = url; _incomingMessages = Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = false, SingleWriter = true }); _outgoingMessages = Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = true, SingleWriter = false }); } public event EventHandler? Connected; public event EventHandler? Disconnected; public event EventHandler<MessageReceivedEventArgs>? MessageReceived; public void Dispose() { _incomingMessages.Writer.TryComplete(); _outgoingMessages.Writer.TryComplete(); } public Task StartAsync() { // avoid race conditions if (!_processingLoop.IsCompleted) { return Task.CompletedTask; } _tokenSource = new CancellationTokenSource(); _processingSend = ProcessSendAsync(_tokenSource.Token); _processingData = ProcessDataAsync(_tokenSource.Token); _processingLoop = Task.Run(async () => { while (!await ConnectAsync().ConfigureAwait(false)) { } }); return Task.CompletedTask; } public async Task StopAsync() { if (_processingLoop.IsCompleted) { return; } Console.WriteLine("Stopping"); _tokenSource?.Cancel(); _tokenSource?.Dispose(); await Task.WhenAll(_processingSend, _processingData, _processingLoop).ConfigureAwait(false); Disconnected?.Invoke(this, EventArgs.Empty); Console.WriteLine("Stopped"); } private async ValueTask<bool> ConnectAsync() { Console.WriteLine("Connecting"); Debug.Assert(_tokenSource != null);
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket using var ws = new ClientWebSocket(); try { // By default this call times out after 20 seconds with a WebSocketException await ws.ConnectAsync(new Uri(_url), CancellationToken.None).ConfigureAwait(false); Connected?.Invoke(this, EventArgs.Empty); } catch (Exception) { return false; } _clientWebSocket = ws; Console.WriteLine("Connected"); _connectionResetEvent.Set(); try { await ProcessReceiveAsync(ws, _tokenSource.Token).ConfigureAwait(false); } catch (OperationCanceledException) { // normal upon task/token cancellation, disregard } catch (WebSocketException ex) when (ex.WebSocketErrorCode == WebSocketError.ConnectionClosedPrematurely) { return false; } catch (Exception) { return false; } _connectionResetEvent.Reset(); Console.WriteLine("End"); return true; } public void Send(string message) { _outgoingMessages.Writer.TryWrite(message); } public ValueTask SendAsync(string message) { return _outgoingMessages.Writer.WriteAsync(message); } private async Task ProcessSendAsync(CancellationToken cancellationToken) { try { while (await _outgoingMessages.Reader.WaitToReadAsync(cancellationToken).ConfigureAwait(false)) { while (_outgoingMessages.Reader.TryRead(out var message)) { await _connectionResetEvent.WaitAsync(cancellationToken).ConfigureAwait(false); Debug.Assert(_clientWebSocket != null);
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket var data = new ArraySegment<byte>(Encoding.UTF8.GetBytes(message)); await _clientWebSocket.SendAsync(data, WebSocketMessageType.Text, true, CancellationToken.None).ConfigureAwait(false); } } } catch (OperationCanceledException) { // normal upon task/token cancellation, disregard } Console.WriteLine("Send loop end"); } private async Task ProcessDataAsync(CancellationToken cancellationToken) { try { while (await _incomingMessages.Reader.WaitToReadAsync(cancellationToken).ConfigureAwait(false)) { while (_incomingMessages.Reader.TryRead(out var message)) { await ProcessMessageAsync(message).ConfigureAwait(false); } } } catch (OperationCanceledException) { // normal upon task/token cancellation, disregard } Console.WriteLine("Data loop end"); } private Task ProcessMessageAsync(string message) { MessageReceived?.Invoke(this, new MessageReceivedEventArgs(message)); return Task.CompletedTask; } private async Task ProcessReceiveAsync(WebSocket webSocket, CancellationToken cancellationToken) { while (true) { ValueWebSocketReceiveResult receiveResult; using var buffer = MemoryPool<byte>.Shared.Rent(4096); await using var ms = new MemoryStream(buffer.Memory.Length); do { receiveResult = await webSocket.ReceiveAsync(buffer.Memory, cancellationToken).ConfigureAwait(false); if (receiveResult.MessageType == WebSocketMessageType.Close) { break; }
{ "domain": "codereview.stackexchange", "id": 43312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, websocket", "url": null }
c#, .net, websocket await ms.WriteAsync(buffer.Memory[..receiveResult.Count], cancellationToken).ConfigureAwait(false); } while (!receiveResult.EndOfMessage); ms.Seek(0, SeekOrigin.Begin); if (receiveResult.MessageType == WebSocketMessageType.Text) { using var reader = new StreamReader(ms, Encoding.UTF8); var message = await reader.ReadToEndAsync().ConfigureAwait(false); await _incomingMessages.Writer.WriteAsync(message, CancellationToken.None).ConfigureAwait(false); } else if (receiveResult.MessageType == WebSocketMessageType.Close) { break; } } } } public sealed class DeribitClient : IDisposable { private readonly GeneralClient2 _client; public DeribitClient() { _client = new GeneralClient2("wss://www.deribit.com/ws/api/v2"); _client.Connected += OnConnected; _client.Disconnected += OnDisconnected; _client.MessageReceived += OnMessageReceived; } public void Dispose() { _client.Connected -= OnConnected; _client.Disconnected -= OnDisconnected; _client.MessageReceived -= OnMessageReceived; _client.Dispose(); } public Task StartAsync() { return _client.StartAsync(); } public Task StopAsync() { return _client.StopAsync(); } public ValueTask SubscribeToDeribitPriceIndexAsync() { string[] subscriptions = { "deribit_price_index.btc_usd" }; var @params = new Dictionary<string, dynamic?> { { "channels", subscriptions } }; var request = new JsonRpcRequest("2.0", Guid.NewGuid(), "public/subscribe", @params); var message = JsonSerializer.Serialize(request); return _client.SendAsync(message); } private void OnConnected(object? sender, EventArgs e) { }
private void OnDisconnected(object? sender, EventArgs e)
{
}

private void OnMessageReceived(object? sender, MessageReceivedEventArgs e)
{
    Console.WriteLine($"Message received: {e.Message}{Environment.NewLine}");
}
}

public class MessageReceivedEventArgs : EventArgs
{
    public MessageReceivedEventArgs(string message)
    {
        Message = message;
    }

    public string Message { get; }
}

public record JsonRpcRequest(
    [property: JsonPropertyName("jsonrpc")] string JsonRpc,
    [property: JsonPropertyName("id")] Guid Id,
    [property: JsonPropertyName("method")] string Method,
    [property: JsonPropertyName("params")] object Params);

It can be executed through the following:

using CodeReview;

using var client = new DeribitClient();
await client.StartAsync().ConfigureAwait(false);
await client.SubscribeToDeribitPriceIndexAsync().ConfigureAwait(false);
//await Task.Delay(5000).ConfigureAwait(false);
//await client.StopAsync().ConfigureAwait(false);
//await Task.Delay(5000).ConfigureAwait(false);
//await client.StartAsync().ConfigureAwait(false);
Console.ReadLine();

Answer: Here are my observations

DeribitClient

SubscribeToDeribitPriceIndexAsync

Re-creating the @params collection on every call is unnecessary; you can create a private static field for it. I would also suggest replacing the dynamic? type with object.

static Dictionary<string, object> rpcRequestParameter = new()
{
    { "channels", new [] { "deribit_price_index.btc_usd" } }
};

public ValueTask SubscribeToDeribitPriceIndexAsync()
{
    var request = new JsonRpcRequest("2.0", Guid.NewGuid(), "public/subscribe", rpcRequestParameter);
    var message = JsonSerializer.Serialize(request);
    return _client.SendAsync(message);
}

GeneralClient2

_processingXYZ

Maybe it is just me, but it seems really odd to initialize a Task field with Task.CompletedTask.
constructor

I would suggest checking that the provided string is a valid URL: either use Uri.IsWellFormedUriString or use Uri.TryCreate.

StartAsync

_semaphore

I'm not sure why you protected only the StartAsync method from race conditions; don't you need to protect StopAsync as well? Please also spend some time finding a better name for this field, like startExclusiveLock.

ClientWebSocket

Why don't you make ws a class-level variable? Passing this object around different methods feels a bit unnatural.

await StartAsync() inside catch

It does seem like this can become an infinite loop, since the method calls itself recursively without any exit condition.

Console.WriteLine

Judging by the ConfigureAwait(false) calls, I guess this class is intended to be released as part of a library. You don't know whether the library will be used inside a WPF application or in an ASP.NET Core app, so please try to avoid using the Console class inside a library class.

UPDATE #1

StopAsync

IsRunning

I'm not sure whether or not it is a good idea to expose IsRunning. It does feel like part of the internal state: you are just taking shortcuts based on its value, and DeribitClient does not use that property at all.

State: not (WebSocketState.Aborted or ...

Turn your guard expression into an early exit:

if (_clientWebSocket is { State: WebSocketState.Aborted or WebSocketState.Closed or WebSocketState.CloseSent })
    return;

await _clientWebSocket.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, string.Empty, CancellationToken.None).ConfigureAwait(false);

Based on the documentation, only the Aborted state could cause an exception. If the documentation is correct then you don't need the try-catch block.

SendAsync

I would suggest passing a CancellationToken with a time constraint to WriteAsync in order to avoid waiting indefinitely.

ConnectAsync

The new Uri(_url) call is error-prone; please validate your input as soon as you receive it.
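The fail-fast URL validation suggested here translates directly to other stacks. As a hedged illustration (a Python sketch with `urllib.parse` standing in for `Uri.TryCreate`; the helper name is my own, only the endpoint URL comes from the post):

```python
from urllib.parse import urlparse

def validate_ws_url(url: str) -> str:
    """Reject anything that is not an absolute ws:// or wss:// URL,
    so a bad endpoint fails in the constructor, not at connect time."""
    parsed = urlparse(url)
    if parsed.scheme not in ("ws", "wss") or not parsed.netloc:
        raise ValueError(f"not a valid websocket URL: {url!r}")
    return url

validate_ws_url("wss://www.deribit.com/ws/api/v2")  # passes
```

Validating in the constructor surfaces a typo at object-creation time, with a clear message, instead of as a connection failure buried inside the reconnect loop.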
catch (Exception) // TaskCanceledException & WebSocketException

Either have a simple catch without any exception type, or use a when clause:

catch (Exception ex) when (ex is OperationCanceledException or WebSocketException)

ProcessSendAsync

Here only the WaitToReadAsync call can throw OperationCanceledException, so you should wrap only that call in the try-catch:

bool shouldContinue;
do
{
    try
    {
        shouldContinue = await _outgoingMessages.Reader.WaitToReadAsync(cancellationToken).ConfigureAwait(false);
    }
    catch (OperationCanceledException)
    {
        shouldContinue = false;
    }

    while (_outgoingMessages.Reader.TryRead(out var message))
    {
        var data = new ArraySegment<byte>(Encoding.UTF8.GetBytes(message));
        await webSocket.SendAsync(data, WebSocketMessageType.Text, true, CancellationToken.None).ConfigureAwait(false);
    }
} while (shouldContinue);

ProcessDataAsync

Same as the previous method; in addition, ProcessMessageAsync could be inlined:

bool shouldContinue;
do
{
    try
    {
        shouldContinue = await _incomingMessages.Reader.WaitToReadAsync(cancellationToken).ConfigureAwait(false);
    }
    catch (OperationCanceledException)
    {
        shouldContinue = false;
    }

    while (_incomingMessages.Reader.TryRead(out var message))
    {
        MessageReceived?.Invoke(this, new MessageReceivedEventArgs(message));
    }
} while (shouldContinue);

ProcessReceiveAsync

The Debug.Assert is unnecessary since the channels are declared readonly. It is strange that this method does not receive a CancellationToken as a parameter. I would suggest splitting this giant infinite loop into smaller chunks.
c#, .net, websocket private async Task ProcessReceiveAsync(WebSocket webSocket) { while (true) { try { var (messageStream, receiveResult) = await ReadFromSocket(webSocket); await ForwardIncommingMessage(messageStream, receiveResult); if (receiveResult.MessageType == WebSocketMessageType.Close) { await CloseAsync().ConfigureAwait(false); return; } } catch (WebSocketException ex) when (ex.WebSocketErrorCode == WebSocketError.ConnectionClosedPrematurely) { await CloseAsync().ConfigureAwait(false); await StartAsync().ConfigureAwait(false); } } } private async Task<(MemoryStream, ValueWebSocketReceiveResult)> ReadFromSocket(WebSocket webSocket) { ValueWebSocketReceiveResult receiveResult; using var buffer = MemoryPool<byte>.Shared.Rent(4096); await using var ms = new MemoryStream(buffer.Memory.Length); do { receiveResult = await webSocket.ReceiveAsync(buffer.Memory, CancellationToken.None).ConfigureAwait(false); if (receiveResult.MessageType == WebSocketMessageType.Close) break; await ms.WriteAsync(buffer.Memory[..receiveResult.Count], CancellationToken.None).ConfigureAwait(false); } while (!receiveResult.EndOfMessage); ms.Seek(0, SeekOrigin.Begin); return (ms, receiveResult); } private async Task ForwardIncommingMessage(MemoryStream messageStream, ValueWebSocketReceiveResult receiveResult) { if (receiveResult.MessageType != WebSocketMessageType.Text) return; using var reader = new StreamReader(messageStream, Encoding.UTF8); var message = await reader.ReadToEndAsync().ConfigureAwait(false); await _incomingMessages.Writer.WriteAsync(message, CancellationToken.None).ConfigureAwait(false); } Disclaimer: I haven't reviewed GeneralClient only GeneralClient2
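The wait-then-drain shape the review keeps returning to (WaitToReadAsync followed by a TryRead loop) is not C#-specific. Here is a minimal asyncio sketch of the same pattern; all names are illustrative, and asyncio.Queue stands in for System.Threading.Channels:

```python
import asyncio

async def pump(queue: asyncio.Queue, handle) -> None:
    """Block until at least one item arrives, then drain everything
    currently buffered -- WaitToReadAsync + TryRead, in asyncio terms."""
    try:
        while True:
            await handle(await queue.get())       # ~ WaitToReadAsync
            while not queue.empty():              # ~ TryRead loop
                await handle(queue.get_nowait())
    except asyncio.CancelledError:
        pass  # normal on shutdown, mirroring the catch blocks above

async def demo() -> list:
    seen = []
    async def handle(message):
        seen.append(message)
    queue = asyncio.Queue()
    for m in ("a", "b", "c"):
        queue.put_nowait(m)
    task = asyncio.create_task(pump(queue, handle))
    await asyncio.sleep(0)  # give the pump one turn of the event loop
    task.cancel()
    await asyncio.gather(task, return_exceptions=True)
    return seen
```

One subtlety this sketch glosses over: on cancellation, items still buffered may be dropped. The reviewer's shouldContinue version makes that choice explicit by running the drain loop once more after the wait is cancelled.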
python, tree, machine-learning Title: Decision Tree for classification tasks in Python Question: I've decided to implement the ID3 Decision Tree algorithm in Python based on what I've learned from George F. Luger's textbook on AI (and other secondary readings). As far as I know, the code is working correctly: import math from random import choice from collections import defaultdict, namedtuple class TreeNode: """Used to implement the tree structure.""" def __init__(self, type = None, value = None): #type may be "property" or "leaf". self.type = type #The edges of the tree are represented as dict keys #and the nodes as the dict values. self.branches = dict() #This variable contains the name of the property represented by the node #or the label, if it's a leaf node. self.value = value class DecisionTree: """This class implements a simple Decision Tree for classification tasks. After providing a dataset, the train method should be used to create the tree. After that, the classify method may be used to classify a new example by traversing the tree.""" def __init__(self, dataset): """The dataset must be a sequence of named tuples representing each sample of the training set, which must contain the values for each attribute and also their label. Attributes might have any name but the samples must have a 'label' attribute.""" self.dataset = dataset self.tree = None self.labels = None def train(self): """Create the decision tree based on the dataset and stores it in self.tree""" if len(self.dataset) == 0: print("The dataset was not provided.") return False
{ "domain": "codereview.stackexchange", "id": 43313, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, tree, machine-learning", "url": null }
python, tree, machine-learning #Find out the list of properties (assuming all examples have the same structure) properties = list(dataset[0]._fields) properties.remove("label") #Find out the possible labels for each sample. self.labels = set([getattr(x, "label") for x in self.dataset]) self.tree = self._induce_tree(self.dataset, properties) def classify(self, example): """Return a string (or a list of strings) containing the label (or possible labels) of the given example.""" tree = self.tree while tree.type != "leaf": tree = tree.branches[getattr(example, tree.value)] return tree.value def _calc_entropy(self, probabilities): """Return the entropy of a given random variable given the probabilities (in the form of a list/tuple) of each of its possible outcomes""" entropy = 0 for p in probabilities: entropy -= 0 if p == 0 else p * math.log(p, 2) return entropy def _calc_expected_entropy(self, partition): """Return the expected information needed to complete the tree""" #Get the total number of elements (population) total_pop = 0 for p in partition.values(): total_pop += len(p) expectation = 0
        for p in partition.values():
            p_pop = len(p)
            p_labels = [getattr(s, "label") for s in p]

            #Get the probability of each label occurrence in the given
            #partition: No. of elements with the label divided by
            #the total no. of elements of the partitions.
            probs = []
            for label in self.labels:
                probs.append(p_labels.count(label)/p_pop)

            #Expected information needed =
            #Sigma |Partition_i| / Total population * Entropy of Partition_i
            #for i from 1 to n (no. of partitions).
            expectation += (p_pop/total_pop) * self._calc_entropy(probs)

        return expectation

    def _choose_property(self, example_set, properties):
        rank = list()

        for p in properties:
            #Partition the example set in groups of samples that
            #share the same value for the property p.
            partition = defaultdict(lambda: list())
            for s in example_set:
                partition[getattr(s, p)].append(s)
python, tree, machine-learning rank.append((p, self._calc_expected_entropy(partition), partition)) #Sort by expected entropy in increasing order. rank.sort(key = lambda x: x[1]) #Since rank is a list of tuples containing the property, #the expected entropy and partition for a subtree with that property #rank[0][0] in the sorted rank will be the property #with the least expected entropy (or with the most significant #information gain) and rank[0][2] will be the partition of #the example set for that property. return (rank[0][0], rank[0][2]) def _induce_tree(self, example_set, properties): """Recursive algorithm for inducing a decision tree based in ID3 algorithm as explained by George F. Luger's textbook (2009). example_set: a list named tuples containing the samples properties: a list of strings containing the remaining properties to be chosen in the tree induction""" #If all samples belong to the same class, then #produce a leaf node for that class (label). labels = set([getattr(s, "label") for s in example_set]) if len(labels) == 1: return TreeNode(type = "leaf", value = list(labels)[0]) #Else if there are no more properties to evaluate #but there are still examples with different labels #pick one at random and produce a leaf node with it. if not properties: return TreeNode(type = "leaf", value = choice(example_set).getatrr("label")) #Else, chose a property and make it the root of a new subtree. p, partition = self._choose_property(example_set, properties) new_root = TreeNode(type = "property", value = p)
python, tree, machine-learning new_properties = list(properties) new_properties.remove(p) #Create a new branch in the subtree for each partition. for value, block in partition.items(): new_root.branches[value] = self._induce_tree(block, new_properties) return new_root #This is just auxiliary code... def generate_pydot_graph(graph, node, edge_name = "none"): """A recursive function to convert the generated tree into a pydot graph""" node_name = edge_name + '-' + node.value if node.type == "leaf": shape = "box" else: shape = "ellipse" graph.add_node(pydot.Node(node_name, label = node.value, shape = shape)) if node.type == "property": for edge, sibling in node.branches.items(): sibling_name = edge+'-'+sibling.value graph.add_edge(pydot.Edge(node_name, sibling_name, label = edge)) generate_pydot_graph(graph, sibling, edge)
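For readers checking `_calc_entropy` above against the textbook formula: it computes the Shannon entropy H(X) = -Σ p·log2(p), with the p·log p term taken as 0 when p = 0. A standalone sketch with known sanity values:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits; p == 0 terms are skipped because
    p * log2(p) tends to 0 as p -> 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

entropy([0.5, 0.5])   # a fair coin: exactly 1 bit
entropy([1.0, 0.0])   # a certain outcome: 0 bits
entropy([0.25] * 4)   # four equally likely outcomes: 2 bits
```

The uniform distribution always maximizes entropy, which is why a split that leaves labels evenly mixed scores worst in the expected-entropy ranking.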
python, tree, machine-learning if __name__ == "__main__": Sample = namedtuple("Sample", ['label', 'credit_history', 'debit', 'collateral', 'income']) dataset = list() dataset.append(Sample('high', 'bad', 'high', 'none', '0-15k')) dataset.append(Sample('high', 'unknown', 'high', 'none', '15-35k')) dataset.append(Sample('moderate', 'unknown', 'low', 'none', '15-35k')) dataset.append(Sample('high', 'unknown', 'low', 'none', '0-15k')) dataset.append(Sample('low', 'unknown', 'low', 'none', '35k+')) dataset.append(Sample('low', 'unknown', 'low', 'adequate', '35k+')) dataset.append(Sample('high', 'bad', 'low', 'none', '0-15k')) dataset.append(Sample('moderate', 'bad', 'low', 'adequate', '35k+')) dataset.append(Sample('low', 'good', 'low', 'none', '35k+')) dataset.append(Sample('low', 'good', 'high', 'adequate', '35k+')) dataset.append(Sample('high', 'good', 'high', 'none', '0-15k')) dataset.append(Sample('moderate', 'good', 'high', 'none', '15-35k')) dataset.append(Sample('low', 'good', 'high', 'none', '35k+')) dataset.append(Sample('high', 'bad', 'high', 'none', '15-35k')) decision_tree = DecisionTree(dataset) decision_tree.train() #Testing the tree... inst_fields = list(Sample._fields) inst_fields.remove("label") NewInstance = namedtuple("NewInstance", inst_fields) tests = list() tests.append(NewInstance("unknown", "low", "none", "15-35k")) #moderate tests.append(NewInstance("good", "node", "none", "0-15k")) #high tests.append(NewInstance("unknown", "high", "adequate", "15-35k")) #high for t in tests: print(decision_tree.classify(t)) #Generate a PNG image with a graphical representation of the tree #to compare with the one in Luger's book :) import pydot graph = pydot.Dot("decision_tree", graph_type = "graph") generate_pydot_graph(graph, decision_tree.tree) graph.write_png("output.png")
python, tree, machine-learning I'm particularly interested in feedback on the choices I've made concerning the data structures used (sequences of namedtuples for the dataset, dicts for partitions and so on). I'd also like to know if I commented and documented the code properly and if there are "more pythonic" ways of doing things. My main goal was to implement the algorithm in a way that it would be easy to read and understand, so performance was not a primary concern of mine. Nonetheless, I'd appreciate tips on that too. Thank you!
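To make the attribute-selection step at the heart of the question concrete: ID3 picks the attribute whose split minimizes the expected (weighted) entropy of the labels, which is what `_choose_property` does. A self-contained sketch on a made-up toy dataset (plain dicts instead of namedtuples; my own data, not the credit table from the book):

```python
import math
from collections import defaultdict

def entropy_of_labels(labels):
    """Shannon entropy of a list of label values."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def expected_entropy(samples, attr):
    """Weighted entropy of the labels after splitting on attr."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[attr]].append(s["label"])
    n = len(samples)
    return sum(len(g) / n * entropy_of_labels(g) for g in groups.values())

def choose_attribute(samples, attrs):
    return min(attrs, key=lambda a: expected_entropy(samples, a))

# Toy data: "size" predicts the label perfectly, "color" does not.
toy = [
    {"size": "big",   "color": "red",  "label": "yes"},
    {"size": "big",   "color": "blue", "label": "yes"},
    {"size": "small", "color": "red",  "label": "no"},
    {"size": "small", "color": "blue", "label": "no"},
]
choose_attribute(toy, ["color", "size"])  # -> "size"
```

On this toy set, splitting on "size" separates the labels perfectly (expected entropy 0), so it beats "color" (expected entropy 1) regardless of the order the attributes are listed in.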
python, tree, machine-learning Answer: You need to type-hint your code. Some of this will be made difficult by the dynamic nature of your field lookups, but you can get 90% of the way there. property and type are poor variable name choices because they shadow built-ins. Don't default type and value as None; you always know what these are going to be so there's no point in leaking nullity into those variables. dict() can just be the literal {}. DecisionTree is troubled by the separation of train from its constructor. There's basically never utility in separating these two methods, since it's expected that train() will always be called after construction, and in the interim the object is in an invalid state. Combining them reveals that the class doesn't actually need to hold onto a dataset at all. Don't return False if the dataset is missing; raise an exception. Your self.labels = set([ is better represented by a set comprehension. _calc_entropy can convert its loop into a sum() with a generator. Also, don't unconditionally subtract with something that might be 0; instead conditionally subtract. Don't getattr(s, "label"); just write s.label. Don't write defaultdict(lambda: list()); list itself is callable and does not need a lambda. (rank[0][0], rank[0][2]) will be made more clear by tuple-unpacking rank[0]. Avoid declaring your Sample and NewInstance in the local namespace; declare them globally with explicit types. Don't import pydot locally; do this at the top. Your __main__ guard is not enough. All of those symbols are still global, and need to be moved into a function. When you do that, a bug is revealed: one of your classes had a reference to the global dataset when it should have used self.dataset. Don't sort followed by [0]; this is equivalent to min(). Suggested import math from collections import defaultdict from random import choice from typing import Collection, Iterable, Literal, NamedTuple, Sequence import pydot
python, tree, machine-learning import pydot class Sample(NamedTuple): label: str credit_history: str debit: str collateral: str income: str class NewInstance(NamedTuple): credit_history: str debit: str collateral: str income: str class TreeNode: def __init__(self, type: Literal['property', 'leaf'], value: str) -> None: self.type = type # The edges of the tree are represented as dict keys # and the nodes as the dict values. self.branches: dict[str, TreeNode] = {} # This variable contains the name of the property represented by the node # or the label, if it's a leaf node. self.value = value class DecisionTree: """This class implements a simple Decision Tree for classification tasks. After providing a dataset, the train method should be used to create the tree. After that, the classify method may be used to classify a new example by traversing the tree.""" def __init__(self, dataset: Sequence[Sample]) -> None: """The dataset must be a sequence of named tuples representing each sample of the training set, which must contain the values for each attribute and also their label. Attributes might have any name but the samples must have a 'label' attribute.""" if len(dataset) == 0: raise ValueError("The dataset was not provided.") # Find out the list of properties (assuming all examples have the same structure) properties = [f for f in dataset[0]._fields if f != 'label'] # Find out the possible labels for each sample. self.labels = {sample.label for sample in dataset} # Create the decision tree based on the dataset and stores it in self.tree self.tree = self._induce_tree(dataset, properties)
python, tree, machine-learning def classify(self, example: NewInstance) -> str: """Return a string (or a list of strings) containing the label (or possible labels) of the given example.""" tree = self.tree while tree.type != "leaf": tree = tree.branches[getattr(example, tree.value)] return tree.value @staticmethod def _calc_entropy(probabilities: Iterable[float]) -> float: """Return the entropy of a given random variable given the probabilities (in the form of a list/tuple) of each of its possible outcomes""" entropy = -sum( p * math.log(p, 2) for p in probabilities if p != 0 ) return entropy def _calc_expected_entropy(self, partition: Collection[Collection[Sample]]) -> float: """Return the expected information needed to complete the tree""" # Get the total number of elements (population) total_pop = sum(len(p) for p in partition) expectation = 0 for p in partition: p_pop = len(p) p_labels = [s.label for s in p] # Get the probability of each label occurrence in the given # partition: No. of elements with the label divided by # the total no. of elements of the partitions. probs = [ p_labels.count(label) / p_pop for label in self.labels ] # Expected information needed = # Sigma |Partition_i| / Total population * Entropy of Partition_i # for i from 1 to n (no. of partitions). expectation += p_pop / total_pop * self._calc_entropy(probs) return expectation
    def _choose_property(
            self,
            example_set: Collection[Sample],
            properties: Iterable[str],
    ) -> tuple[
        str,  # property
        dict[str, list[Sample]],  # partition
    ]:
        rank = []

        for p in properties:
            # Partition the example set in groups of samples that
            # share the same value for the property p.
            partition = defaultdict(list)
            for s in example_set:
                partition[getattr(s, p)].append(s)

            rank.append((p, self._calc_expected_entropy(partition.values()), partition))

        # Pick the entry with the least expected entropy (or, equivalently,
        # the most significant information gain); rank contains tuples of
        # (property, expected entropy, partition).
        property, entropy, partition = min(rank, key=lambda x: x[1])
        return property, partition

    def _induce_tree(
            self,
            example_set: Sequence[Sample],
            properties: Collection[str],
    ) -> TreeNode:
        """Recursive algorithm for inducing a decision tree based in ID3
        algorithm as explained by George F. Luger's textbook (2009).

        example_set: a list named tuples containing the samples
        properties: a list of strings containing the remaining properties
        to be chosen in the tree induction"""

        # If all samples belong to the same class, then
        # produce a leaf node for that class (label).
        labels = {s.label for s in example_set}
        if len(labels) == 1:
            only_label, = labels
            return TreeNode(type="leaf", value=only_label)
python, tree, machine-learning # Else if there are no more properties to evaluate # but there are still examples with different labels # pick one at random and produce a leaf node with it. if not properties: return TreeNode(type="leaf", value=choice(example_set).label) # Else, chose a property and make it the root of a new subtree. p, partition = self._choose_property(example_set, properties) new_root = TreeNode(type="property", value=p) new_properties = [prop for prop in properties if prop != p] # Create a new branch in the subtree for each partition. for value, block in partition.items(): new_root.branches[value] = self._induce_tree(block, new_properties) return new_root def generate_pydot_graph(graph: pydot.Dot, node: TreeNode, edge_name: str = "none") -> None: """A recursive function to convert the generated tree into a pydot graph""" node_name = edge_name + '-' + node.value if node.type == "leaf": shape = "box" else: shape = "ellipse" graph.add_node(pydot.Node(node_name, label=node.value, shape=shape)) if node.type == "property": for edge, sibling in node.branches.items(): sibling_name = edge + '-' + sibling.value graph.add_edge(pydot.Edge(node_name, sibling_name, label=edge)) generate_pydot_graph(graph, sibling, edge)
python, tree, machine-learning def main() -> None: dataset = ( Sample( 'high', 'bad', 'high', 'none', '0-15k'), Sample( 'high', 'unknown', 'high', 'none', '15-35k'), Sample('moderate', 'unknown', 'low', 'none', '15-35k'), Sample( 'high', 'unknown', 'low', 'none', '0-15k'), Sample( 'low', 'unknown', 'low', 'none', '35k+'), Sample( 'low', 'unknown', 'low', 'adequate', '35k+'), Sample( 'high', 'bad', 'low', 'none', '0-15k'), Sample('moderate', 'bad', 'low', 'adequate', '35k+'), Sample( 'low', 'good', 'low', 'none', '35k+'), Sample( 'low', 'good', 'high', 'adequate', '35k+'), Sample( 'high', 'good', 'high', 'none', '0-15k'), Sample('moderate', 'good', 'high', 'none', '15-35k'), Sample( 'low', 'good', 'high', 'none', '35k+'), Sample( 'high', 'bad', 'high', 'none', '15-35k'), ) decision_tree = DecisionTree(dataset) tests = ( NewInstance("unknown", "low", "none", "15-35k"), # moderate NewInstance( "good", "node", "none", "0-15k"), # high NewInstance("unknown", "high", "adequate", "15-35k"), # high ) for t in tests: print(decision_tree.classify(t)) # Generate a PNG image with a graphical representation of the tree # to compare with the one in Luger's book :) graph = pydot.Dot("decision_tree", graph_type="graph") generate_pydot_graph(graph, decision_tree.tree) graph.write_png("output.png") if __name__ == "__main__": main() Output
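One review point above deserves a tiny demonstration: sorting a list only to take its first element does the same job as `min()` with the same key, just with more work (O(n log n) versus O(n)). The entropy numbers below are illustrative, not computed from the dataset:

```python
# (property, expected entropy) pairs -- illustrative values only
rank = [("income", 0.564), ("credit_history", 0.266), ("debit", 0.581)]

by_sort = sorted(rank, key=lambda x: x[1])[0]   # sorts everything, keeps one
by_min = min(rank, key=lambda x: x[1])          # single linear scan

assert by_sort == by_min == ("credit_history", 0.266)
```

When two entries tie on the key, the two spellings still agree: `sorted` is stable and `min` keeps the earliest minimum, so both return the tied entry that appears first.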
c++, algorithm, matrix Title: С++: Find max element from section in matrix Question: I'm a newbie in programming. Could you please check my code and give any refactoring ideas, tips, etc... how it can be improved. I wrote a code to perform the following task: Find in randomly filled square matrix M x M max element which located in given sector, where M - is matrix size. Here is the image of how sectors are located. You should print the matrix, max element and it's position(if there are several of them all of which equal to max, print all their positions in given sector) !Important: elements located on diagonals(both principal and secondary) shouldn't be taken into account!. 2 Examples of program working: Here goes the code: #include <iostream> #include <vector> #include <ctime> #include <cstdlib> #include <cstdint> #include <iomanip> // Sectors // \1/ // 2 X 3 // /4\ void ShowIntro(const std::vector<std::vector<int>>& matrix); int RestrictInput(int lowerBound, int upperBound); std::vector<std::vector<int>> CreateMatrix(int rows, int columns); void FillMatrixWithRandom(std::vector<std::vector<int>>& matrix); void PrintAllElements(const std::vector<std::vector<int>>& matrix); void PrintMaxElements(const std::vector<std::vector<int>>& matrix, int sector); int main() { int M, sector; M = RestrictInput(3, 100); auto matrix = CreateMatrix(M, M); FillMatrixWithRandom(matrix); ShowIntro(matrix); sector = RestrictInput(1, 4); PrintMaxElements(matrix, sector); return 0; } std::vector<std::vector<int>> CreateMatrix(int rows, int columns) { std::vector<std::vector<int>> matrix = std::vector<std::vector<int>>(rows, std::vector<int>(columns)); return matrix; }
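CreateMatrix plus FillMatrixWithRandom amounts to "allocate an M×M grid, then fill it with small random values". A Python sketch of the same two steps (function names are mine; note that `% 10`, as in the C++ above, produces values in [0, 9]):

```python
import random

def create_matrix(rows, columns, fill=0):
    """rows x columns grid as a list of lists, like vector<vector<int>>."""
    return [[fill] * columns for _ in range(rows)]

def fill_matrix_with_random(matrix, upper=10):
    """In-place fill with values in [0, upper), mirroring rand() % upper."""
    for row in matrix:
        for j in range(len(row)):
            row[j] = random.randrange(upper)

m = create_matrix(3, 3)
fill_matrix_with_random(m)
```

A subtlety worth noting: `[[fill] * columns for _ in range(rows)]` builds a fresh inner list per row, whereas `[[fill] * columns] * rows` would alias one row object M times, so writing to one cell would appear to change a whole column.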
{ "domain": "codereview.stackexchange", "id": 43314, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, algorithm, matrix", "url": null }
void FillMatrixWithRandom(std::vector<std::vector<int>>& matrix)
{
    srand( (unsigned int) time(0)); //Init Random Number Generator (RNG)
    //row ~ y axis
    for (int row = 0; row < matrix.size(); ++row)
    {
        // column ~ x
        for (int column = 0; column < matrix[row].size(); ++column)
        {
            matrix[row][column] = rand() % 10; // [0; 9]
        }
    }
}

void PrintAllElements(const std::vector<std::vector<int>>& matrix)
{
    for (const auto& v : matrix)
    {
        for (const auto& elem : v)
        {
            //setw(2) for better look
            std::cout << std::setw(2) << elem << ' ';
        }
        std::cout << '\n';
    }
}

void PrintMaxElements(const std::vector<std::vector<int>>& matrix, int sector)
{
    int currentMax = INT_MIN;
c++, algorithm, matrix //Iterate through matrix and //find max element in given sector auto rows = matrix.size(); auto columns = matrix[0].size(); for (int i = 0; i < rows; ++i) { for (int j = 0; j < columns; ++j) { switch (sector) { case 1: if (i < rows / 2 && i < j && j + i < rows - 1) { if (matrix[i][j] > currentMax) { currentMax = matrix[i][j]; } } break; case 2: if (j < rows / 2 && i > j && j + i < rows - 1) { if (matrix[i][j] > currentMax) { currentMax = matrix[i][j]; } } break; case 3: if (i > rows / 2 && i > j && j + i > rows - 1) { if (matrix[i][j] > currentMax) { currentMax = matrix[i][j]; } } break; case 4: if (j > rows / 2 && i < j && j + i > rows - 1) { if (matrix[i][j] > currentMax) { currentMax = matrix[i][j]; } } break; } } }
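The three strict inequalities per sector can be sanity-checked mechanically outside C++. A small sketch (in Python, only to keep the check short) that enumerates which cells the sector-1 predicate selects:

```python
def sector1_cells(m):
    """Cells picked by the sector-1 test used in the switch:
    top half (i < m // 2), above the main diagonal (i < j), and
    above the secondary diagonal (i + j < m - 1). The strict
    inequalities automatically exclude both diagonals."""
    return {(i, j) for i in range(m) for j in range(m)
            if i < m // 2 and i < j and i + j < m - 1}

# In a 4x4 matrix only (0,1) and (0,2) survive all three conditions.
cells = sector1_cells(4)
```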
c++, algorithm, matrix std::cout << "Max element: " << currentMax << '\n'; std::cout << "Positions(x, y): "; //Depending on sector print all positions where max element is for (int y = 0; y < rows; ++y) { for (int x = 0; x < columns; ++x) { switch (sector) { case 1: if (y < rows / 2 && y < x && x + y < rows - 1) { if (matrix[y][x] == currentMax) { std::cout << "(" << x << ", " << y << ")\n"; } } break; case 2: if (x < rows / 2 && y > x && x + y < rows - 1) { if (matrix[y][x] == currentMax) { std::cout << "(" << x << ", " << y << ")\n"; } } break; case 3: if (y > rows / 2 && y > x && x + y > rows - 1) { if (matrix[y][x] == currentMax) { std::cout << "(" << x << ", " << y << ")\n"; } } break; case 4: if (x > rows / 2 && y < x && x + y > rows - 1) { if (matrix[y][x] == currentMax) { std::cout << "(" << x << ", " << y << ")\n"; } } break; } } } } void ShowIntro(const std::vector<std::vector<int>>& matrix) { std::cout << "Generated random matrix " << matrix.size() << "X" << matrix[0].size() << " :\n"; PrintAllElements(matrix); std::cout << '\n'; std::cout << "Choose sector in which max element will be found:\n"; std::cout << " \\1/ \n" << "2 X 4 \n" << " /3\\ \n"; std::cout << "Sector: \n"; }
int RestrictInput(int lowerBound, int upperBound)
{
    //Read value until it's >= lowerBound AND <= upperBound
    int inputValue;
    do
    {
        std::cout << "Enter value [" << lowerBound << ":" << upperBound << "]: \n";
        std::cin >> inputValue;
    } while (inputValue < lowerBound || inputValue > upperBound);
    return inputValue;
}
In particular, in the function PrintMaxElements() there is a chunk of code with switch (sector) that is responsible for checking whether an element belongs to the given sector. I believe this part can be rewritten in a more intelligent or prettier way. Here are two images showing how the if conditions in this switch work. Explanation for finding the max element in sector #1: we get a randomly filled matrix; remember, we don't count diagonal elements. The first part of the if, y < rows / 2, is shown as the green rectangle. The second part, y < x, means the elements must be above the main (principal) diagonal, shown as the orange triangle. The third (last) part, x + y < rows - 1, means the elements must be below the secondary diagonal. Uniting these three conditions in one if statement, we get the area marked as the cyan triangle where we should look for the max element (red circle). Thanks in advance :) Everybody have a great day.
Answer: Create a class Matrix
Instead of passing around vectors of vectors, consider creating a class Matrix. That makes the code much more readable. Also consider adding a function to access an element at a given row and column; this way you can change the way the matrix is stored internally without having to change any of the code that uses it.
Store the matrix as a one-dimensional vector
Vectors of vectors are not very efficient. Consider storing the matrix elements in a single, one-dimensional std::vector. Create a function to access elements that will take a row and column and convert that into an index into the vector.
Use C++'s random number generators
Instead of using srand() and rand(), I recommend that you use random number generation functions from C++'s standard library.
You only need to check two triangles
From your own diagrams it is clear that you only need to check if a given element is in the overlapping region of two triangles; the rectangle is redundant. However:
It's faster to directly construct the triangle of interest
Your solution visits every element of the matrix, but less than a quarter will be in the desired region. So a lot of CPU time is wasted checking elements you are not interested in. But it's actually quite easy to just visit those elements that are in the desired triangle. For example, for the top triangle, you just need to look at all rows with indices < rows / 2 (the rectangle from diagram 3). Furthermore, for the first row in that region, you start at column 1 up until column columns - 1, then for the second row it's starting from column 2 up to columns - 2, and so on:
switch (sector) {
case 1:
    for (int i = 0; i < rows / 2; ++i)
    {
        for (int j = i + 1; j < columns - i - 1; ++j)
        {
            currentMax = std::max(currentMax, matrix[i][j]);
        }
    }
    break;
...
}
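That the direct loop bounds visit exactly the predicate-selected cells can be cross-checked mechanically. A quick sketch (in Python rather than C++, purely to keep the verification short) comparing both formulations for the top sector:

```python
def sector1_by_predicate(m):
    # The original three-condition membership test from the question.
    return {(i, j) for i in range(m) for j in range(m)
            if i < m // 2 and i < j and i + j < m - 1}

def sector1_by_construction(m):
    # The answer's direct loop bounds: row i spans columns
    # i+1 .. m-i-2 inclusive, i.e. range(i + 1, m - i - 1).
    return {(i, j) for i in range(m // 2) for j in range(i + 1, m - i - 1)}

# Both formulations agree for every matrix size we try.
for m in range(3, 20):
    assert sector1_by_predicate(m) == sector1_by_construction(m)
```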
Going further
Even nicer would be to have a Matrix class with a function to return a pair of iterators or perhaps even a std::ranges::view that represents a sector. That way, you can pass the iterators or view to existing standard library functions that return the max element or find all elements matching a given value. Then you could write something like:
auto range = matrix.getSector(sector);
auto currentMax = *std::ranges::max_element(range);
javascript, algorithm, programming-challenge, strings, interview-questions Title: Find the 'n' most frequent words in a text, aka word frequency Question: I'm doing freeCodeCamp's Coding Interview Prep to improve my JavaScript skills. This challenge is called "Word Frequency", and is based on the Rosetta Code's entry of the same name. The version I'm solving says:
Given a text string and an integer n, return the n most common words in the file (and the number of their occurrences) in decreasing frequency.
Write a function to count the occurrences of each word and return the n most common words along with the number of their occurrences in decreasing frequency. The function should return a 2D array with each of the elements in the following form: [word, freq]. word should be the lowercase version of the word and freq the number denoting the count. The function should return an empty array, if no string is provided. The function should be case insensitive, for example, the strings "Hello" and "hello" should be treated the same. You can treat words that have special characters such as underscores, dashes, apostrophes, commas, etc., as distinct words. For example, given the string "Hello hello goodbye", your function should return [['hello', 2], ['goodbye', 1]].
For this solution I'm trying to be concise and trying to use modern features of the language. Here is the code:
const wordSplit = (text) => text
  .replace(/[.,:;?!]/g, '')
  .split(/\s/)
  .filter(word => word !== '')
  .map(word => word.toLowerCase())

const countWords = (text) => {
  const wordCount = {}
  const count = (word) => {
    wordCount[word] ?
wordCount[word]++ : wordCount[word] = 1 } wordSplit(text).forEach(word => count(word)) return wordCount } const wordFrequency = (text = '', topn) => Object .entries(countWords(text)) .sort((a, b) => b[1] - a[1]) .slice(0, topn) An example of using the code: console.log(wordFrequency("Don't you want to know what I don't know?", 3)) That will print: [ [ "don't", 2 ], [ 'know', 2 ], [ 'you', 1 ] ]
I have some specific, optional, questions:
Do you consider this code easily readable and maintainable? If not, what would you change to improve this area?
Can this be written more concisely? (without sacrificing too much readability)
Can this code be improved using other features of the language? Perhaps more modern features?
Can we use the nullish coalescing operator (??) or the logical nullish assignment operator (??=) to simplify the assignment and increment currently done in the ternary operator in the count function?
But I'm also very interested in anything else you can think of that can improve this implementation, or change it in interesting ways, including more efficient approaches, more readable solutions, best practices, code smells, patterns & anti-patterns, coding style, etc.
Answer: The code is easily readable and variables are well named. What you could do is have functions instead of lambdas, but that is just my personal preference. Also, "wordSplit" is a function, so it would be a bit better named splitIntoWords(text) {...}
From the reuse and maintenance perspective, wordSplit feels like it is a little misplaced. It is not a major difference, but it makes countWords more reusable:
function wordFrequency(text = '', topn) {
  const words = splitIntoWords(text)
  const wordCount = countWords(words)
  return Object.entries(wordCount)
    .sort((a, b) => b[1] - a[1])
    .slice(0, topn)
}
You can also isolate the last line into a separate function and give it a name:
function pickNMostFrequent(wordCount, n) {
  return Object.entries(wordCount)
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
}
function wordFrequency(text = '', topn) {
  const words = splitIntoWords(text)
  const wordCount = countWords(words)
  return pickNMostFrequent(wordCount, topn)
}
countWords is the only place that feels awkward. What you can do to simplify it is the following (yes, you can use nullish assignment):
function countWords(words) {
  return words.reduce((wordCount, word) => {
    wordCount[word] ??= 0
    wordCount[word]++
    return wordCount // the reducer must return the accumulator
  }, {})
}
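Just for contrast (not a suggested change to the JavaScript), the same split-count-rank pipeline exists in Python's standard library, where collections.Counter handles both the counting and the ranking:

```python
import re
from collections import Counter

def word_frequency(text='', topn=None):
    # Mirror wordSplit: strip basic punctuation, lowercase, split on whitespace.
    words = re.sub(r'[.,:;?!]', '', text).lower().split()
    # most_common(n) sorts by count descending, like the sort + slice above.
    return Counter(words).most_common(topn)

result = word_frequency("Don't you want to know what I don't know?", 3)
```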
python, python-3.x, random Title: Generating random integers in one of three ranges Question: I have a function that returns a random integer number. This function only accepts the three numbers 1, 2 and 3 (there's another function in my code which makes sure of this; otherwise that function will raise ValueError and re-prompt the user again). If the function gets 1 it should generate a random number between 0-9. If it gets 2 it should generate a random number between 10-99. And lastly, if it gets 3 it should generate a number between 100-999. I'm wondering if there is a way in which I don't use all these ifs. My code for now:
from random import randrange

def generate_integer(level):
    if level == 1:
        N = randrange(0,10)
    elif level ==2:
        N = randrange(10,100)
    else:
        N = randrange(100,1000)
    return N
Answer: An issue I see with this code is that it behaves unexpectedly if passed another value than 1, 2 or 3: it silently accepts any value and assumes anything other than 1 and 2 is 3, which is not what is specified in your description of the code. Invalid/out-of-range inputs should cause the function to raise an exception. Furthermore, the function should be documented (using a docstring) to indicate what values are accepted. Using Python type hints to specify what type of input and output the caller should provide/expect can also be useful. Finally, the logic can be improved by using a lookup table.
def generate_integer(level: int) -> int:
    """Return a random integer in a range specified by the `level` argument:
    level = 1: [0,9]
    level = 2: [10,99]
    level = 3: [100,999]"""
    bounds = [0, 10, 100, 1000]
    return randrange(bounds[level-1], bounds[level])
This code will raise a TypeError, ValueError, or IndexError if called with an invalid value. You could add your own checks for the type and range of the argument in order to have more descriptive error messages, but in a simple case like this it might not be necessary.
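If an explicit check with a friendlier message is wanted, the lookup table combines cleanly with a guard clause. A small sketch building on the answer's version:

```python
from random import randrange

def generate_integer(level: int) -> int:
    """Return a random integer with `level` digits, for level in {1, 2, 3}."""
    if level not in (1, 2, 3):
        raise ValueError(f"level must be 1, 2 or 3, got {level!r}")
    bounds = [0, 10, 100, 1000]
    return randrange(bounds[level - 1], bounds[level])

# Spot-check that each level stays inside its documented range.
samples = [generate_integer(lvl) for lvl in (1, 2, 3) for _ in range(50)]
```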
c++, object-oriented, linked-list, classes Title: UpdateCustomer - implementation in C++ Question: I am trying to write a program that does the following: The application will read the customer and book orders data from a data file “BookOrders.txt” and load them in a customer linked list, each customer object contains an order linked list. The program will allow user to place an order, update an order, cancel an order, checkout orders for one customer and print all customers and their orders. The data file should be updated after each transaction of orders. And well, I'm trying to implement a function that allows me to update customer name (UpdateCustomer(Customer& current);), Here is what I've done so far: void CustomerList::UpdateCustomer(Customer& current) { bool found = false; nodeType<Customer> *location; searchCustomerByNameHelper(current.getCustomerName(), found, location); if (found) { location->info.getCustomerName(); cout << "The customer is updated." << endl; } else { cout << "The customer is not found." << endl; } return; } And I was wondering if this is a good way to do it? or if I have a better option that I could apply? Here is the "entire" program (I have added the link below with all the other files (headers): including OrderList.h, Order.h, etc..., just in case): CustomerList.cpp #include <iostream> #include <cstdlib> #include <string> #include <cstdbool> #include "CustomerList.h" using namespace std; ostream& operator<<(ostream& os, const CustomerList& customer) { customer.print(); return (os); } void CustomerList::AddCustomer(Customer& customer) { insertLast(customer); } bool CustomerList::searchCustomerByName(string searchName) const { bool found = false; nodeType<Customer> *location; searchCustomerByNameHelper(searchName, found, location); return (found); }
c++, object-oriented, linked-list, classes searchCustomerByNameHelper(searchName, found, location); return (found); } void CustomerList::searchCustomerByNameHelper(string searchName, bool found, nodeType<Customer>* &current) const { found = false; current = first; while (current != nullptr && !found) { if (current->info.checkCustomerName(searchName)) { found = true; } else { current = current->link; } } } Customer CustomerList::getCustomerByName(string _name) const { Customer n; return n; } void CustomerList::UpdateCustomer(Customer& current) { bool found = false; nodeType<Customer> *location; searchCustomerByNameHelper(current.getCustomerName(), found, location); if (found) { location->info.getCustomerName(); cout << "The customer is updated." << endl; } else { cout << "The customer is not found." << endl; } return; } void CustomerList::UpdateDataFile(ofstream& file) { nodeType<Customer> *current = first; while (current != nullptr) { file << current->info.getCustomerName() << endl << current->info.getAddress() << endl << current->info.getEmail() << endl; current = current->link; } return; } CustomerList.h #ifndef CUSTOMERLIST_H_INCLUDED #define CUSTOMERLIST_H_INCLUDED #include <iostream> #include <string> #include <cstdlib> #include "LinkedList.h" #include "Customer.h" using namespace std; class CustomerList : public linkedListType<Customer> { friend ostream& operator<<(ostream& os, const CustomerList& customer); public: void AddCustomer(Customer& customer); bool searchCustomerByName(string searchName) const; void searchCustomerByNameHelper(string searchName, bool found, nodeType<Customer>* &current) const; Customer getCustomerByName(string _name) const;
c++, object-oriented, linked-list, classes Customer getCustomerByName(string _name) const; void UpdateCustomer(Customer& current); void UpdateDataFile(ofstream& file); }; #endif // !CUSTOMERLIST_H_INCLUDED Customer.h #ifndef CUSTOMER_H_INCLUDED #define CUSTOMER_H_INCLUDED #include <string> #include <cstdbool> #include "Order.h" #include "OrderList.h" using std::string; using std::ostream; class Customer { friend ostream& operator<<(ostream&, const Customer&); private: string name; string address; string email; OrderList orders; public: Customer(); Customer(string _name, string _address, string _email, OrderList _orders); OrderList getOrders(); void AddOrder(Order arg); void UpdateOrders(string arg, int n); void CancelOrder(string arg, int n); string getCustomerName(); string getAddress(); string getEmail(); double checkoutOrders(); bool operator==(const Customer& n) const; bool operator!=(const Customer& n) const; bool checkCustomerName(string _name); }; #endif // !CUSTOMER_H_INCLUDED Customer.cpp #include <string> #include <cstdbool> #include "Customer.h" using namespace std; ostream& operator<<(ostream& os, const Customer& obj) { return (os); } Customer::Customer() : name(""), address(""), email("") { } Customer::Customer(string _name, string _address, string _email, OrderList _orders) : name(_name), address(_address), email(_email), orders(_orders) { } OrderList Customer::getOrders() { return (orders); } void Customer::AddOrder(Order arg) { return; } void Customer::UpdateOrders(string arg, int n) { return; } void Customer::CancelOrder(string arg, int n) { return; } string Customer::getCustomerName() { return (name); } string Customer::getAddress() { return (address); }
string Customer::getEmail()
{
    return (email);
}
double Customer::checkoutOrders()
{
    return (0.0);
}
bool Customer::operator==(const Customer& n) const
{
    return (name == n.name || address == n.address || email == n.email);
}
bool Customer::operator!=(const Customer& n) const
{
    return (name != n.name || address != n.address || email != n.email);
}
bool Customer::checkCustomerName(string _name)
{
    return(name == _name);
}
Link: https://github.com/Jrchavez09/Book_Store_application
Answer: First thing I'd say is that it will help document your code if you make function arguments const references when you aren't changing them. It might also help performance, but that's normally less important with modern machines :). If you have a using namespace std; at the top of your code, please remove it. It defeats the purpose of namespaces. It is always better to use std::string rather than string. It will be less pain further down the line. However, I have noticed you are formatting the code a certain way (return values on a separate line, Java-style braces, bracketed return values, etc.), so string might be part of your setup. It's really trivial, but why has CustomerList::AddCustomer() got a capital A? (Also the U in Update is capitalized.) Consistency helps readability. IN MY OPINION (and I'm not always right) I would say that you could implement UpdateCustomer to be clearer and easier to read and maintain. The way you have done it looks 'odd': you are using out arguments rather than the function return value. Out variables tend to produce bulkier code, but that doesn't mean they are a bad idea.
CustomerList::searchCustomerByNameHelper(): I think this function should be a private function called findByName.
nodeType<Customer>* findByName (const string& searchName)
{
    nodeType<Customer>* iCurrent = first;
    while (iCurrent != nullptr && !iCurrent->info.checkCustomerName(searchName) )
    {
        iCurrent = iCurrent->link;
    }
    return iCurrent; // Will be nullptr if the name is not found.
}
This means that AddCustomer() can check to see if the name already exists, because duplicate names will cause issues with your code as it is. searchCustomerByName (which should really be called doesNameExist) could be stripped down to just one line:
bool CustomerList::searchCustomerByName(const string& searchName) const
{
    return (nullptr != findByName (searchName));
}
bool CustomerList::getCustomerByName(const string& _name, Customer& outValue) const
{
    const nodeType<Customer>* pResult = findByName(_name);
    if (pResult)
        outValue = pResult->info; // copy the Customer out of the node
    // else Should really empty the outValue.
    return (pResult != nullptr);
}
bool CustomerList::UpdateCustomer(Customer& current)
{
    const nodeType<Customer>* pResult = findByName(current.getCustomerName());
    if (pResult)
    {
        // Do your update
    }
    return (pResult != nullptr);
}
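The shape the answer recommends here, one private find that returns a node or null with the public queries built on top of it, is language-independent. A sketch of the same pattern in Python (names are illustrative, not taken from the reviewed code):

```python
class CustomerList:
    def __init__(self):
        self._customers = []  # simple list standing in for the linked list

    def _find_by_name(self, name):
        """Single source of truth: return the matching customer or None."""
        for customer in self._customers:
            if customer["name"] == name:
                return customer
        return None

    def name_exists(self, name):
        return self._find_by_name(name) is not None

    def add_customer(self, customer):
        # Reject duplicates, as the answer suggests AddCustomer should.
        if self.name_exists(customer["name"]):
            return False
        self._customers.append(customer)
        return True
```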
c#, json Title: Json Array to ND Json converter Question: Problem We have a tool which produces output as json array We have another tool which anticipates input as ndjson (newline delimited json) I need to write a converter to transform json array to ndjson Sample json array [ { "Property1": true, "Property2": 0.1, "Property3": "text" }, { "Property1": false, "Property2": 0.2, "Property3": "not text" }, { "Property1": true, "Property2": 3.14, "Property3": "sample" }, { "Property1": false, "Property2": -51.0, "Property3": "Property3" } ] Sample ndjson {"Property1":true,"Property2":0.1,"Property3":"text"} {"Property1":false,"Property2":0.2,"Property3":"not text"} {"Property1":true,"Property2":3.14,"Property3":"sample"} {"Property1":false,"Property2":-51.0,"Property3":"Property3"} Design Keep the converter as simple as possible The converter should receive a source and a target file paths The converter should not parse the data just reformat it The converter should perform only some basic preliminary checks Solution I'm comfortable to implement this with Json.NET But I wanted to practice with System.Text.Json So, I've decided to implement it with the latter I've added some comments to the code to help the reviewers public static class JsonArrayToNDJsonConverter { static JsonWriterOptions writerOptions = new() { Indented = false }; public static void Convert(string sourcePath, string targetPath) { //Preliminary checks on source if (!File.Exists(sourcePath)) throw new FileNotFoundException("Source file is not found"); using var sourceFile = new FileStream(sourcePath, FileMode.Open); var jsonDocument = JsonDocument.Parse(sourceFile); if (jsonDocument.RootElement.ValueKind != JsonValueKind.Array) throw new InvalidOperationException("Json Array must be the source");
//Not that graceful handling of file existence
if (File.Exists(targetPath))
    File.Delete(targetPath);
using var targetFile = new FileStream(targetPath, FileMode.CreateNew);
using var targetFileWriter = new StreamWriter(targetFile);
using var jsonObjectWithoutIndentation = new MemoryStream();
foreach (var jsonObject in jsonDocument.RootElement.EnumerateArray())
{
    //Write json object without indentation into a memorystream
    var jsonObjectStreamWriter = new Utf8JsonWriter(jsonObjectWithoutIndentation, writerOptions);
    jsonObject.WriteTo(jsonObjectStreamWriter);
    jsonObjectStreamWriter.Flush();
    //Write memorystream to target
    var singleLinedJson = Encoding.UTF8.GetString(jsonObjectWithoutIndentation.ToArray());
    targetFileWriter.WriteLine(singleLinedJson);
    //Reuse memory stream
    jsonObjectWithoutIndentation.Position = 0;
    jsonObjectWithoutIndentation.SetLength(0);
    }
  }
}
Answer: always increase readability by using access modifiers. Convert is misleading, as you're reading and writing files, so a name describing what the function does is very useful to understand what's going on inside the method. File.Delete(targetPath) is not necessary, as you can use FileMode.Create, which will create the file if it does not exist and overwrite it if it does. use JsonException instead of InvalidOperationException. Utf8JsonWriter should be outside the loop, and it is IDisposable. You can always use File.ReadAllText or File.WriteAllText instead for small to medium size files, as a shortcut; however, for large files, I advise using proper streaming techniques (such as reading the file in chunks) instead of reading everything at once. use proper naming for objects, such as memoryStream instead of jsonObjectWithoutIndentation and jsonWriter instead of jsonObjectStreamWriter, to avoid confusion. use Flush() and Reset() together with the Utf8JsonWriter to reset the writer. Using Flush() alone would cause problems, as there are some writer counters that are only reset by Reset() and not by Flush(), like BytesCommitted. you can use FileStream with Utf8JsonWriter directly, you just need to add a new line after each write. here is a revised version that shows the above points:
public static class JsonArrayToNDJsonConverter
{
    private static JsonWriterOptions _jsonWriterOptionNoIndentation = new() { Indented = false };
    private static byte[] _newLineBytes = Encoding.UTF8.GetBytes(Environment.NewLine);
    public static void ConvertJsonArrayFileToNDArray(string sourcePath, string targetPath)
    {
        if (string.IsNullOrWhiteSpace(sourcePath)) throw new ArgumentNullException(nameof(sourcePath));
        if (string.IsNullOrWhiteSpace(targetPath)) throw new ArgumentNullException(nameof(targetPath));
        if(!File.Exists(sourcePath)) throw new FileNotFoundException("Source file is not found");
        var sourceFile = File.ReadAllText(sourcePath);
        var jsonDocument = JsonDocument.Parse(sourceFile);
        if (jsonDocument.RootElement.ValueKind != JsonValueKind.Array) throw new JsonException("Json Array must be the source");
        using var targetFileStream = new FileStream(targetPath, FileMode.Create);
        using var jsonWriter = new Utf8JsonWriter(targetFileStream, _jsonWriterOptionNoIndentation);
        foreach (var jsonObject in jsonDocument.RootElement.EnumerateArray())
        {
            jsonObject.WriteTo(jsonWriter);
            jsonWriter.Flush();
            jsonWriter.Reset();
            targetFileStream.Write(_newLineBytes);
        }
    }
}
Though I think it would be better if we just divide all that into several methods inside one helper class, to give better reusability. Also, it would be more effective if we just target JsonArray instead of JsonDocument: as you're only targeting the JsonArray, using JsonNode and JsonArray would be enough. Example :
public static class JsonHelper
{
    public static JsonNode LoadFromFile(string filePath)
    {
        if (string.IsNullOrWhiteSpace(filePath)) throw new ArgumentNullException(nameof(filePath));
        if (!File.Exists(filePath)) throw new FileNotFoundException("Source file is not found");
        return JsonNode.Parse(File.ReadAllText(filePath));
    }
    public static void WriteToFileAsNDArray(this JsonArray jsonArray, string targetPath)
    {
        if (jsonArray == null) throw new ArgumentNullException(nameof(jsonArray));
        if (string.IsNullOrWhiteSpace(targetPath)) throw new ArgumentNullException(nameof(targetPath));
c#, json // remember this will throw an exception if the target path is invalid // if the file already exists, it'll be overridden File.WriteAllText(targetPath, jsonArray.ToJsonStringNDArray()); } } public static class JsonNodeExtensions { private static JsonWriterOptions _jsonWriterOptionNoIndentation = new() { Indented = false }; private static byte[] _newLineBytes = Encoding.UTF8.GetBytes(Environment.NewLine); public static string ToJsonStringNDArray(this JsonArray jsonArray) { if (jsonArray == null) throw new ArgumentNullException(nameof(jsonArray)); using var memoryStream = new MemoryStream(); using var jsonWriter = new Utf8JsonWriter(memoryStream, _jsonWriterOptionNoIndentation); foreach (var jsonObject in jsonArray) { jsonObject.WriteTo(jsonWriter); jsonWriter.Flush(); jsonWriter.Reset(); memoryStream.Write(_newLineBytes); } return Encoding.UTF8.GetString(memoryStream.ToArray()); } } Usage : JsonHelper.LoadFromFile(sourcePath) .AsArray() .WriteToFileAsNDArray(targetPath);
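For comparison only, the same array-to-NDJSON reshaping is a few lines with Python's json module; a sketch that operates on strings (file I/O deliberately left out):

```python
import json

def json_array_to_ndjson(text: str) -> str:
    """Parse a JSON array and emit one compact object per line."""
    data = json.loads(text)
    if not isinstance(data, list):
        raise ValueError("a JSON array must be the source")
    # separators=(",", ":") drops the default spaces, like Indented = false.
    return "\n".join(json.dumps(obj, separators=(",", ":")) for obj in data)

ndjson = json_array_to_ndjson('[{"a": 1}, {"b": true}]')
```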
javascript, recursion, asynchronous, async-await

Title: Asynchronous recursive function for generating a unique Id

Question:

The code below is my first purpose-built asynchronous recursive function. It took me a while to figure out how to write the thing, and I would like a second opinion. It's part of a project for The Odin Project, a simple book log. I don't have the site live yet; I want to implement a few more features first. I was hoping that you all could take a look at it and let me know what I did right and what I did wrong.

The purpose is to create a unique ID for a book when it is created, by first generating the possible ID, then checking it against the given array of books currently in play.

const randomNum = (minNum, maxNum) => {
  const min = Math.ceil(minNum);
  const max = Math.floor(maxNum);
  return Math.floor(Math.random() * (max - min) + min);
};

const letters = 'ABCDEFGHIJKLMNPRSTUVWXYZ';

const generateId = () => {
  return `${letters[randomNum(1, 24)]}${randomNum(1000, 9999)}`;
};

const makeId = async (checkedArray) => {
  const newId = generateId();
  if (!checkedArray.includes(newId)) {
    return Promise.resolve(newId);
  }
  const recursiveResult = await new Promise((resolve, reject) => {
    if (checkedArray) {
      resolve(makeId(checkedArray));
    } else {
      reject('makeId failed');
    }
  });
  return recursiveResult;
};

The biggest hang-up that I ran into was where to put the successful result's return statement. I wanted to get this without asking for help, so it took me a couple of days' worth of research. I finally found out about how and when to return a Promise.resolve(), and with happy abandon I implemented it. And I have to say, figuring it out myself was immensely satisfying!

One alternative that I came up with was to use the current date. But I felt that while it worked, it wasn't an ideal solution... Unless it was...

const altId = (bookTitle) => {
  return `${bookTitle.replace(/\s/g, '')}${Date.now()}`;
};

Answer: Over engineered

This code is over engineered.
It is not asynchronous (nothing is pausing execution).

The id is too complex for the domain of books (ids in checkedArray), even if that list comes from a DB.

The random function is needlessly truncating the range values.

And there is a bug in the code.

UID

If you want a GUID (globally unique ID) or UUID (universally unique ID), unique among all IDs ever made, then it is best to follow a standard algorithm. Note that it is still possible for a duplicate ID to be created, as there is no way to check it against all GUIDs. It thus relies on statistics: the odds of any ID already existing are vanishingly small.

From what I can make out from your question, the domain is a database of books. The best way to ensure a unique ID for each book is for the database to store a table that holds an ID source. For each book added, you query the ID and increment its value in the DB by 1. This will ensure each new book in the DB has a unique ID. Even for a 32-bit int, the number of unique IDs available will easily cover all books likely to be stored in the DB.

Async?

There is nothing in the code you have presented that requires a promise. You can return the result (an ID or the string 'makeId failed') immediately, without the need to create and/or resolve promises.

Bug?

There is a bug in the function makeId. See the comment marks in the snippet: /* CODE MARK B */ checks whether checkedArray exists, which suggests that it is possible for it not to exist. However, /* CODE MARK A */ requires checkedArray to exist and to have the function includes.
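The "ID source in a table" approach described above can be sketched without committing to any particular database. Here is a minimal in-memory stand-in (the class and method names are made up for illustration; a real DB would do the read-and-increment atomically):

```python
class IdSource:
    """Mimics a DB table holding the next free ID: read it, then increment it."""

    def __init__(self, start: int = 1):
        self._next = start

    def allocate(self) -> int:
        # Each call hands out the current value and advances by 1,
        # so the same ID can never be issued twice.
        current = self._next
        self._next += 1
        return current
```

No retry loop and no collision check is needed: uniqueness falls out of the monotonic counter by construction.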
const makeId = async (checkedArray) => {
  const newId = generateId();
  if (!checkedArray.includes(newId)) { /* CODE MARK A */
    return Promise.resolve(newId);
  }
  const recursiveResult = await new Promise((resolve, reject) => {
    if (checkedArray) { /* CODE MARK B */
      resolve(makeId(checkedArray));
    } else {
      reject('makeId failed');
    }
  });
  return recursiveResult;
};

Either...

B is not required, as there will always be an array, making the guard redundant, or

A is unsafe and will throw an error, rejecting the id request without the expected reject message 'makeId failed'.

Either case is unexpected (or inconsistent) behavior and thus considered a BUG.

Random

I can not see the need for you to truncate the min and max values in the random function. It is unneeded complexity.

Rewrite random

The following rewrites of randomNum will always return an int. Note that the name randomNum is poor, as it returns not just any number but an integer. A better name is randomInt.

const randomInt = (min, max) => Math.floor(Math.random() * (max - min) + min);

// or if you only want a positive result,
// using a bitwise OR | to floor will return a positive 32-bit int.
const randomUint32 = (min, max) => Math.random() * (max - min) + min | 0;
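Tying the Async? and Bug? points together: without promises, the whole retry can be a plain loop that returns the first free ID immediately and fails explicitly. A sketch of that shape in Python (the retry cap and ID format are made up for illustration):

```python
import random
import string

def make_id(existing, max_tries=1000):
    """Generate 'letter + 4 digits' candidates until one is not already taken."""
    for _ in range(max_tries):
        candidate = random.choice(string.ascii_uppercase) + str(random.randint(1000, 9999))
        if candidate not in existing:
            return candidate  # success path: just return, no Promise needed
    # explicit, guaranteed failure path (unlike the original's unreachable reject)
    raise RuntimeError("make_id failed")
```

The success and failure paths are both unconditional here, which removes the inconsistency flagged at CODE MARK A and B.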
python, performance, animation, matplotlib

Title: Trouble with rendering speed of the animation of a simulation?

Question:

The following code is a conversion from some old Java I wrote to Python. It shows the beginnings of a simulation of an ant colony. I am finding the animation speed very slow, and I'm wondering if I am doing something wrong - or if it's nothing to do with the animation, and everything to do with not exploiting the vector methods of numpy (which, in fairness, I struggle to understand). I am aware that there are 5k points being mapped, but I've seen matplotlib demos handling many more than that.

I am relatively new to Python, matplotlib, and numpy - and do not really understand how to profile. I particularly do not 'get' numpy - nor do I really understand broadcasting (though I read about it in 'absolute_beginners').

import random
from math import sqrt, cos, sin, radians

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import use as mpl_use
from matplotlib.animation import FuncAnimation
from matplotlib.colors import LinearSegmentedColormap

mpl_use("QtAgg")  # PyCharm seems to need this for animations.

history = 100  # trail size per mover.


class Mover:
    offs = [i for i in range(history)]  # used to reference the mover history.

    def __init__(self, colour, window_limits):
        # colour = R G B tuple of floats between 0 (black) and 1 (full)
        r, g, b = colour
        self.velocity = 0.75
        self.ccmap = LinearSegmentedColormap.from_list(
            "", [[r, g, b, 1.0 - i / history] for i in range(history)])
        self.mv = 0., 1.  # starting unit vector.
        self.mx = [home_x for i in range(history)]
        self.my = [home_y for i in range(history)]
        self.plt = plt.scatter(self.mx, self.my, c=Mover.offs, cmap=self.ccmap)
        self.normalising = False
        self.wrapping = True
        self.w_limits = window_limits
        self.steer(radians(random.uniform(-180, 180)))
    def start(self):
        x, y = self.mx[0], self.my[0]  # copy most recent state.
        self.mx.insert(0, x)
        self.my.insert(0, y)
        if len(self.mx) > history:  # remove the oldest state if too long.
            self.mx.pop()
            self.my.pop()

    def normalise(self):
        x, y = self.mv
        mag = sqrt(x*x + y*y)
        if mag != 1.0:
            if mag != 0:
                x /= mag
                y /= mag
            else:
                x = 0.0
                y = 1.0
            self.mv = x, y

    def steer(self, theta):  # theta is in radians
        x, y = self.mv
        u = x*cos(theta) - y*sin(theta)  # x1 = x0cos(θ) – y0sin(θ)
        v = x*sin(theta) + y*cos(theta)  # y1 = x0sin(θ) + y0cos(θ)
        self.mv = u, v
        if self.normalising:
            self.normalise()

    def transit(self):
        u, v = self.mv
        self.mx[0] += self.velocity * u
        self.my[0] += self.velocity * v
        if self.wrapping:
            self.wrap()

    def wrap(self):
        x, y = self.mx[0], self.my[0]
        if x > self.w_limits:
            x -= self.w_limits
        if y > self.w_limits:
            y -= self.w_limits
        if x < 0:
            x += self.w_limits
        if y < 0:
            y += self.w_limits
        self.mx[0], self.my[0] = x, y

    def update_plot(self):
        self.plt.set_offsets(np.c_[self.mx, self.my])


window_limits = 100
home_x, home_y = 50.0, 50.0
wl = window_limits

fig, ax = plt.subplots()
ax.set_xlim(0, wl)
ax.set_ylim(0, wl)

mover_count = 50
movers = []
colours = plt.cm.turbo
color_normal = colours.N / mover_count
for m in range(mover_count):
    col = colours.colors[int(m * color_normal)]
    mover = Mover(col, wl)
    movers.append(mover)


def init():
    ax.set_xlim(0, window_limits)
    ax.set_ylim(0, window_limits)
    return [o.plt for o in movers]
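On the numpy point the question raises: the per-coordinate if-chain in wrap() above can be expressed as a single vectorised operation over whole arrays at once. A sketch (not a drop-in replacement for the class; it assumes positions are held in numpy arrays, and for offsets within one window width np.mod gives the same result as the original adds and subtracts, except exactly at the boundary):

```python
import numpy as np

def wrap_all(xs: np.ndarray, ys: np.ndarray, limit: float):
    """Wrap every coordinate into [0, limit) in one shot.

    np.mod follows the sign of the divisor, so negative values
    wrap upward (e.g. -5 mod 100 -> 95), matching the x += limit branch.
    """
    return np.mod(xs, limit), np.mod(ys, limit)
```

Applied to all movers' coordinates stacked into one array, this replaces thousands of Python-level comparisons per frame with a couple of C-level array operations, which is typically where the speedup from numpy comes from.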