programming
|
Another thing, slightly off topic, but since you are bringing it up I will just mention it. It sounds like you're misunderstanding what testing should entail. When testing, you aim for 100% code coverage. That doesn't mean every single function must be tested in isolation. It means the tests you write hit all parts of the code.
A lot of the code you would write to get results matching your querySelector example is setters, getters, and if statements. You would probably have to write little to no additional testing for it, because it would ideally be executed when your other tests were run.
|
programming
|
>So add metadata to JSON and it's exactly the same thing
Exactly. If you want to create JSON and serialize/deserialize from HTML feel free to do it. But why?
Here's the point quoted from the post:
>Given there’s a significant cost to adopt the right JSON specification to communicate information between computers and pick the right client, prefer as a sensible default to enhance the website’s HTML and leverage a battle-tested specification that is already there.
You can use JSON; it's just that the Web doesn't have enough specifications, clients, and tools to support it. What people end up doing is creating an API structure that is rigid and specific to each website and writing code against that structure. Then you see problems like the one with the "Sydney latitude".
|
programming
|
Schema. Data should have a defined schema. If the producer defines a schema and then violates it, the producer shouldn't just say, "Well, the consumer shouldn't have relied on the data being there; the fault is all on the consumer."
Also, if you don't define a schema and just shove metadata anywhere, sometimes the data exists and sometimes it doesn't. You haven't created a meaningful data feed, and in fact you haven't defined a contract of behavior, so you haven't even created an interface. You have just left breadcrumbs someone might be able to follow if they have an intimate knowledge of the system.
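That contract point can be sketched in a few lines. The schema shape and field names below are hypothetical, purely to illustrate checking a feed against a declared contract instead of hoping the breadcrumbs are still there:

```javascript
// Toy "contract": the producer declares which fields are required,
// and any consumer can verify a record against that declaration.
// (The schema format here is invented for illustration.)
const schema = { required: ["id", "price"] };

// Return the list of required fields missing from a record.
function violations(record, s) {
  return s.required.filter((field) => !(field in record));
}

console.log(violations({ id: 1, price: 9.99 }, schema)); // []
console.log(violations({ id: 2 }, schema)); // ["price"]
```

With a declared schema, a missing field is unambiguously the producer's fault; without one, it's just a breadcrumb that vanished.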
|
programming
|
I guess we agree on the need for a schema. That's fundamental to how XML works and how we can design clients for it. The context of the post, though, is for those who are used to creating JSON APIs without a schema/specification and therefore might be well served by just enhancing the HTML markup. Sometimes, even if you already know all this, enhancing the HTML markup is a very good, scalable solution for certain kinds of clients, like a client that only needs to read information from the website or just simulate what a user can do.
|
programming
|
An example is kind of hard to show for this, because I would essentially be showing the lack of a test on some specific function and then arguing that, since it is a simple function and it is called in many other tests that pass, I can assume it is working as intended.
But just for example, say I wanted to create a specific object structure from some other object, and for certain properties I want default values instead of null values.
I create a function buildA(b); this function does the proper null checks and sets all the values. It is essentially only null checks and setters. Do I really need to test this function in isolation, or is it enough that my other tests which utilize this function return expected results?
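A minimal sketch of what such a function might look like (the field names and the calling code are made up for illustration):

```javascript
// Hypothetical buildA(b): nothing but null checks, defaults, and setters.
function buildA(b) {
  return {
    name: b.name != null ? b.name : "unknown",
    count: b.count != null ? b.count : 0,
    tags: b.tags != null ? b.tags : [],
  };
}

// Higher-level code that uses buildA; a test of *this* function
// already exercises every branch in buildA above, so a dedicated
// isolated test for buildA adds little.
function describeA(b) {
  const a = buildA(b);
  return `${a.name} (${a.count})`;
}

console.log(describeA({ name: "widget" })); // "widget (0)"
```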
This is a pretty contrived example. If you want a better idea, I would head to respected projects on GitHub and check out their unit tests.
Integration testing gets a bit trickier because you have to test for side effects as well.
|
programming
|
Yes, schema is important, but I feel like your post ignored or missed the fact that putting metadata into HTML elements and searching for it by attribute isn't honoring any kind of schema. Yes, it doesn't violate the HTML schema or break the page. Yes, if it disappears, your client code looking for it won't choke. But you have essentially given them a bunch of client secrets and said, "Go find the data at the end of the rainbow," and at any moment that rainbow might disappear.
It isn't creating an interface because there is no contract of behavior; it is a simple lookup, and that API already exists: it is called querySelector. Don't get me wrong, that might be fine for your cases. But it isn't a replacement for an API.
|
programming
|
I'm sorry, there is a communication problem here. Your post showed examples of scraping specific data out of a payload, but now you are talking about using elements to indicate workflow and where following some element will bring you.
What your comment just described is that your client would need to know there is a cart page, and .prefix-cart will take you there.
So does the client already know the site structure and just want help navigating it? If this is the case, just give the client a routing table, don't put any of those attributes in the HTML, and inspect the URLs in the anchor tags against the routing table to find out where they would take you.
I feel like something just isn't getting communicated correctly here. We are both using the wrong language and talking past each other.
|
programming
|
>What your comment just described is that your client would need to know there is a cart page, and .prefix-cart will take you there.
I never said that. I mean that the client needs to know where are the triggers to go to the cart page. A good example is the "a\[rel="cart"\]" attribute. You can just get that element (querySelector("a\[rel='cart-page'\]")) and follow the link without having to know the URL of the cart page.
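As a rough sketch of that pattern (the page markup and URL here are invented, and the regex "parser" is a DOM-free stand-in for illustration only; real clients should use a proper HTML parser):

```javascript
// Browser version, one line: follow the cart link without knowing its URL.
//   const cartUrl = document.querySelector("a[rel='cart-page']").href;

// DOM-free illustration of the same idea. Note the toy regex assumes
// the rel attribute appears before href in the tag.
function findRelHref(html, rel) {
  const re = new RegExp(`<a[^>]*rel="${rel}"[^>]*href="([^"]*)"`);
  const m = html.match(re);
  return m ? m[1] : null;
}

const page = '<nav><a rel="cart-page" href="/checkout/cart">Cart</a></nav>';
console.log(findRelHref(page, "cart-page")); // "/checkout/cart"
```

The client only needs to know the meaning of "cart-page"; the URL itself can change freely without breaking anything.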
If you give the client a custom data structure like a routing table it adds more maintenance burden to the whole system. The HTML spec already have a lot of stuff to assist non-visual clients, see [https://www.w3.org/MarkUp/1995-archive/Elements/A.html](https://www.w3.org/MarkUp/1995-archive/Elements/A.html)
>I feel like something just isn't getting communicated correctly here. We are both using the wrong language and talking past each other.
I'm just talking about how the web was intended to work, that's it.
|
programming
|
If I give a contrived example, it is not because I am advocating doing something that way; it was simply to illustrate a point. No one would or should write a test for a function that just sets values and does a couple of null checks.
You will have other code that executes that code piece, and as long as you aren't doing weird reflection or injection on that object, the fact that the code executing it passes successfully is enough.
Yes, your past post makes sense; that is essentially what I am saying, with one caveat: utility functions sometimes need to be tested in isolation, since they are essentially a subdomain within a project whose usefulness is in helping your other code pieces succeed.
|
programming
|
I am well aware that HTML has many tools to assist non-visual clients. I work on sites that have to be 508 compliant. There is a well-defined schema for telling those clients how to read HTML documents.
rel='cart-page' is meaningless unless you know to look for cart-page and what that means. Also, it is a misuse of the attribute: rel has a limited range of acceptable values. Really, everything you've described so far should be going in a data attribute.
If you are defining what pages exist, and how to know an element leads to one, then an API that defines a routing table schema for a client to consume is, for most site frameworks, a one-off that never needs maintenance. With most frameworks for building websites, you could easily just define areas of your site as you make them and generate the routing table on application start, so it would have the correct URLs every time it started... this is getting a little off topic though.
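A toy version of that routing-table idea might look like this (route names and paths are hypothetical):

```javascript
// Hypothetical routing table a site could serve to clients at startup;
// the client matches anchor URLs against it instead of scanning the
// markup for magic attributes.
const routes = {
  "/checkout/cart": "cart-page",
  "/account/orders": "order-history",
};

// Resolve an href (possibly absolute) to a route name, or null.
function classifyLink(href, table) {
  const path = href.replace(/^https?:\/\/[^/]+/, "");
  return table[path] ?? null;
}

console.log(classifyLink("https://shop.example/checkout/cart", routes)); // "cart-page"
console.log(classifyLink("/about", routes)); // null
```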
My point is... the client in this scenario is required to have some knowledge about what cart-page means. Or is the scenario you're describing the reverse, where it would crawl all pages and associate the links afterwards based on the rel attribute?
Basically, I'm struggling to see how rel='cart-page' or anything similar tells the client anything about the page it is visiting unless it has prior knowledge. And if the idea is to associate that link with the cart page as the client crawls the site... the normal way to do that would be to just let it crawl and, on the actual page, put that information in a meta tag.
|
programming
|
It's a breeze with any language with good quasiquotation - there is no difference. Just like you're unlikely to use low level list manipulation in Lisp (cons, car, cdr, ...) when you can use quasiquotation, you won't need to go into AST implementation details in more complex languages either.
Scala and Rust, unfortunately, do not provide really symmetric quasiquotation, and their macro expansion is inferior to Common Lisp. But, for a language with the same kind of macro expansion semantics (including lexically scoped macros and an ability to control the expansion order), all the same stuff you're used to in Common Lisp is still applicable. Have a look at the [examples](https://github.com/combinatorylogic/mbase/blob/master/src/l/lib/pfront/extensions.hl) of Lisp-style macros implemented as syntax extensions (never mind that there is a Lisp underneath...).
There is one huge issue I have with metaprogramming on top of raw S-expressions: location metadata. There is simply nowhere to stick it. You lose it quickly, so your macros cannot provide nice and clean error messages.
|
programming
|
> I wonder why you've chosen a lisp underneath...
Just one simple reason - to be able to grow a language incrementally, from scratch.
Otherwise, if you already have a sufficiently powerful language, you can jury rig macros on top of it, without ever having to go through Lisp.
> Still, why anything different from S-expressions hasn't caught on in practice for code juggling?
Inertia. Most people who know what to do with macros are coming from the Lisp culture, so they'll naturally stick to Lisp.
There were actually alternative streams of work - see Nemerle and Converge for example.
|
programming
|
Not sure if you’ve heard of it, but [Haxe](https://haxe.org) has a macro system which is implemented as a special runtime with full access to the standard library and the AST as generated by the compiler. It’s a feature built into the compiler.
It’s an incredibly powerful system when combined with other compiler features, like the completion server (which provides auto completion data for IDEs and such). One of the samples on the Haxe site shows how you can add google search results to any IDE that implements autocompletion using the compiler services.
So with that you get to use a modern and comfortable statically typed OOP language (which supports cross compiling) while you do meta programming stuff, all without damaging your parenthesis keys.
|
programming
|
Everyone knows the semi-colon key is reinforced by manufacturers on standard keyboards.
[Generic functions](https://haxe.org/manual/type-system-generic.html)
> can you invoke the compiler programmatically at run-time to optimize just-in-time generated code?
Probably... Haxe doesn't have its own runtime; it's a language specifically designed to be cross-compiled, like TypeScript ([except it supports more than just JavaScript](https://haxe.org/documentation/introduction/compiler-targets.html)).
So where/how your code runs depends entirely on the target. If you are targeting an interpreted language like JavaScript, it should be trivial to generate some Haxe code and run it through the compiler. You may even be able to generate an AST and pass that to the compiler directly, rather than having to mess with the syntax of the language (although AFAIK the compiler doesn't currently have that feature).
The hard part I guess would just be to manage reloading the code after building it, as each target would have different requirements. The C++ target would be a pain in the ass because compilation is so slow. Totally possible though!
|
programming
|
Sorry, I probably didn't provide the context; I didn't mean that kind of generic functions but rather multiple-parameter dispatch methods as in [CLOS](https://en.wikipedia.org/wiki/Common_Lisp_Object_System).
Look, Haxe is a cool and interesting thing in its own right. But it's at the mercy of the host, and if your host is not Common Lisp, you can't provide portable run-time invocation of the (host) compiler. And I don't mean run-time interpretation like eval, but a real compiler with optimizations and native code generation (and which is fast, as is the case for CL compilers -- in part because they can be invoked to compile a single JIT-generated function).
|
programming
|
The point of this survey is supposed to be how much a developer should make. Two developers, both with equivalent skills, would be expected to have different salaries if one programmed while managing a team and the other had no other duties.
Same with system architects. No one would question that that's a development role, but that's often a highly-paid role compared to a standard developer, even though they both might have the same laundry list of skills. The closest thing to "system architect" in this calculator is "full-stack developer". Whatever the hell that really means, since it could cover dozens of different skills.
|
programming
|
>Their list of roles is laughably myopic and obviously biased towards web,
Honestly that's about representative of most programming gigs when talked about in the net, so maybe that part is accurate :(.
It's a bit odd to me. Networking is supposed to be extremely effective, but how do people in more "niche" (if we're calling anything outside of mobile and web "niche") domains even find each other to begin with?
|
programming
|
Hi guys. Just wanted to share some of the work we've been doing at Reddit on the iOS application. Even at this length, this is just a sample of a lot of the other work we're doing. I hope you guys find it interesting and informative to see how an app with millions of users is built and maintained.
If you're interested in hearing a more in-depth audio interview of this work you can find it here: [http://insideiosdev.com/evolving-mobile-architecture-at-reddit](http://insideiosdev.com/evolving-mobile-architecture-at-reddit)
|
programming
|
“The terminology has been a point of contention in the tech community for nearly two decades and now it was just removed from one of the most popular programming languages in the world.”
Can anyone remember *any* controversy about this terminology in the last 20 years? Because I’ve been in this industry longer than that, and cannot remember a single conversation, blog post, tech journal story, user group discussion where this ever came up. I call bullshit.
This entire bugaboo has been fabricated out of whole cloth by power-seeking leeches who are determined to undermine open source as they have done in so many other areas of society. It will not stop here. They will move on to the next hill of offense, and will not stop until we entirely forget what this project is for, and whom it serves.
|
programming
|
Primary/replica does a **horrible** job as a replacement to master/slave in a lot of cases.
For example, a database replica is an exact, 100% copy of the master database. That's what the word replica means -- an exact copy of something else.
Master/slave very frequently has nothing to do with replication -- it's commonly used as the master being source of truth, the tie-breaking voter, or the entity that controls or otherwise orchestrates the slaves, etc.
|
programming
|
Old timers sometimes hate change and see it as an affront to them personally. Change for the sake of change is almost certainly bad, but if two words ignite bad feelings in part of the community then changing them surely isn't the death of the python programming language.
Or maybe it is, what do I know. The whole thing is funny to me, but then again my ancestors weren't bought and sold like cattle within the last 200 years.
|
programming
|
Eh, it's just terminology. It's been around forever, at least since IDE drives were designed and I'm sure long before that in CS.
I can see where it can make people uncomfortable. Words have meaning. Besides, the powers that be change terms regularly. Things like the industry name itself going from Data Processing to Information Technology to Information Science to Data Science to all sorts of other ones in the middle that I've long forgot.
|
programming
|
It's a gigantic waste of time and resources for completely irrational reasons, and accomplishes nothing. That's the problem. Its sole purpose is controlling the behavior of other people.
It should never be anyone else's problem that someone is irrationally offended.
On the other hand, refusing to change accomplishes great things: It shuts down these idiots and prevents them from moving on to the next irrational nonsense. The world shouldn't have to change because a handful of people with mental illness can't handle the words "master/slave".
|
programming
|
Changing a couple lines in documentation using find all occurrences is so hard? Wew. Of all the fields that need to change their vocabulary we are probably most capable of updating.
It accomplishes making certain parts of the community more comfortable. I'd think it's very rational to be offended at something that happened to your ancestors < 4 lifetimes ago and which often occurs as human trafficking today.
They can handle it. It's not about that. It's about creating a more comfortable atmosphere in a field that is predominantly white guys.
|
programming
|
>Besides, the powers that be change terms regularly.
Who are "the powers that be"? Textbooks will continue to use master/slave. Some languages will still continue to use master/slave, while others come up with their own obscure replacements. In most cases, those new terms are *less* descriptive, and even straight up confusing. It makes communicating about these topics super difficult if you and me are trying to discuss the same topic, yet we use completely different terminology to describe it, and that terminology doesn't even make sense in the context of our conversation.
|
programming
|
> completely irrational reasons
This is a huge fucking understatement. Might as well remove the words from the English language entirely, because it's no less offensive for them to appear in the dictionary than it is for them to appear in the source code of a programming language.
> It should never be anyone else's problem that someone is irrationally offended.
Couldn't agree more. And my guess is that this is the complaint of a VERY small number of ultra thin-skinned people who probably make their work environments hostile for everyone else because of how easy they are to offend.
*Out of principle* we should not be catering to such a thin-skinned vocal minority like that. At all.
Why? [Because giving in to people like that has real, damaging consequences for others](https://arstechnica.com/tech-policy/2013/03/how-dongle-jokes-got-two-people-fired-and-led-to-ddos-attacks/).
|
programming
|
> It accomplishes making certain parts of the community more comfortable.
The obligation is on them to change and deal with it, not the rest of the world. I hate the song "Safety Dance" by Men Without Hats, but I don't insist that the world change for my benefit.
> I'd think it's very rational to be offended at something that happened to your ancestors < 4 lifetimes ago and which often occurs as human trafficking today.
Except that using these words has ZERO to do with human trafficking today, or what happened to their ancestors in the far past.
They are words, no more, no less, and they happen to accurately describe this technical situation.
The only thing that matters is someone's intent to offend. It NEVER matters if someone's offended for irrational reasons. It's their problem to change their own behavior and deal with other people. Otherwise, it never ends. The only reason these people are offended is so they can have a power trip and control other people. NOTHING is accomplished by giving in to irrationality and a huge amount of harm is done.
|
programming
|
I really don't see the connection to open source here. Indeed I'd argue this kind of stuff is perfect for open source.
Changing some words after a reasonable issue being posted is kowtowing? I guess I've been confusing normal courtesy with kowtowing all my life.
If they were asking for architectural changes *without reason*, sure. But this is a mild inconvenience for us, and a reasonable if small increase in QoL for others (especially maybe someone recovering from modern human trafficking, let alone African Americans at large).
I'm not missing the point at all. I'm arguing that the changes are *so* simple that changing them for even a couple people (not, ya know millions of people) is reasonable and shows empathy and awareness of history and sociological realities.
|
programming
|
If I asked you in a technical interview to tell about a time you had to deal with an insanely difficult challenge in programming and you said that some languages use the term master/slave and others use the term parent/child for the same thing, I would think you were a complete moron. Anyone who thinks this is a genuinely difficult problem to overcome needs to find an easier profession/hobby that doesn't involve constant change and learning new concepts and terms.
|
programming
|
>that doesn't involve constant change and learning new concepts and terms.
Except we *don't* need to learn "New terms". Some terms are changing, some are the same, and things are becoming less internally consistent. Have you read the discussion behind these python changes? The words "Master" and "Slave" are still used numerous times, and the devs have said they won't change. It only makes programming and python less consistent, and confuses new learners. If everyone agreed on these changes I'd be fine, but even *just python* can't agree on new terminology.
Change for the sake of change helps no-one. Trying to insult my intelligence because I won't needlessly stop using a word like "Master" because someone claims it's *only* related to slavery is ridiculous. Does that mean I'm now offending people if I refer to someone as a "Master craftsman"?
|
programming
|
No because they are always used to describe something so the fact they are used for many things is not an issue. Good and bad are used for many things but it is accepted that what makes a car good and what makes a bridge good are very different. Nobody is confused about that. It does not lend any clarity. This push is done by people who are addicted to *feeling* like they are improving the world but they are too lazy to do anything that actually helps people so they change words in open source projects.
|
programming
|
> The obligation is on them to change and deal with it, not the rest of the world. I hate the song "Safety Dance" by Men Without Hats, but I don't insist that the world change for my benefit.
Why? There are more slaves today than there are developers. It’s not the rest of the world changing, it’s a relatively minuscule community of people who are *constantly* evolving their vocabulary anyway.
>Except that using these words has ZERO to do with human trafficking today, or what happened to their ancestors in the far past.
Human trafficking is often just a euphemism for slavery. Just look up human trafficking victims 2018 and you’ll see slavery, forced labor, sexual exploitation. Slave isn’t in great company there. It stems directly from their use in the past, and refers to the same notion, if not the same practice, today.
>Otherwise, it never ends.
No it does end. There are gradations here. Language evolves, *constantly*. What is so wrong with changing a couple words. Even 10 words. There is nothing irrational about the history of language and its use for oppression, and there’s nothing irrational about changing some words in documentation on the reasonable assumption it might make a large population of potential devs more comfortable.
|
programming
|
I never said you needed to stop using the word master, and I never said that these changes are a good idea or help anyone. My objection was to your claim that these kinds of changes cause any sort of task related to programming in Python to be "insanely difficult". These are minor changes that are at most a minor inconvenience. You are insulting your own intelligence when you claim that confusion around these terms will actually cause you any problems in real life.
|
programming
|
Power seeking leeches determined to undermine OSS... Yikes.
The only possible problem I see is that now there are multiple terms for the same thing - but that happens all the time in programming anyway. Every other language has their own term for a package or library.
There is no grand conspiracy to destroy OSS. Changes like these _really_ aren't unreasonable.
This is _not_ censorship. It's not "you can't say those words," it's "we have better, less provocative terminology, so we should use that instead".
Let's resolve real problems before worrying about an imaginary "SJW agenda."
|
programming
|
>My objection was to your claim that these kinds of changes cause any sort of task related to programming in Python to be "insanely difficult".
Again, go read the public discussion yourself. Some of the top developers of Python said that lots of occurrences won't be changed, because it would require a lot of work to re-write large portions of the language. Do your research before talking out of your ass.
These aren't minor changes, it requires rewriting parts of code that are at the core of how Python functions. This isn't just search and replace, jeez.
|
programming
|
In the US that’s down to 35, and down globally. Regardless, mining wasn’t primarily inflicted on others based on their race, and isn’t a practice anathema to freedom, democracy, etc. Child labour is wrong, you’re right. We don’t name things child labor, or rape, or murder (we use kill, but that has a lesser connotation than murder and is also something which affects everyone uniformly).
Mining is a job, with the possibility of safety, providing for a family, and most importantly freedom, etc. Slavery isn’t.
|
programming
|
Here's the problem: someone who takes offense at the words Master/Slave because of their connotations in recent *American* history is too narrow-minded and self-centered to deserve any decision-making power.
Expressions referencing slavery are part of the vernacular of every major language because slavery was familiar to every major society in human history. If slavery evokes visions of enslaved Africans in the American South in the antebellum era, feel free to picture enslaved Northern Europeans in the Roman era, or enslaved Jews in the biblical era. If you're concerned about this terminology normalizing or whitewashing the horrors of slavery in the US, you're being a pompous American imperialist who assumes ours is the only history that matters.
Using 'Slave' in figures of speech and programming vernacular is not going to normalize or excuse slavery any more than the even more prolific use of "Kill" in programming lingo is normalizing murder.
I agree that slavery is awful.
I agree that it can be unpleasant to have something so dark baked into our language.
I agree that changing the wording is not that big of an inconvenience.
But I think that the people pushing for this change are ignorant pricks who barely believe their own nonsense and really just like bossing people around. I don't want to enable that kind of person.
|
programming
|
> Here's the problem: someone who takes offense at the words Master/Slave because of their connotations in recent American history is too narrow-minded and self-centered to deserve any decision-making power.
Why is it not the opposite?
Different races doesn’t make it right, and I have made repeated mention in other comments of modern global slavery which these days often affects Asians.
Also I don’t think there’s much bossing happening here. He made a request to a public forum. Many other contributors agreed with him. It’s not just a couple people taking over open source software, it’s the maintainers and the community who say, "not a bad idea". It’s another PR/issue like any other QoL change.
|
programming
|
> How does changing two words with plenty of synonyms undermine "open source"?
You're completely missing the point: The controversy has nothing to do with words, or synonym scarcity.
The whole problem is the concept of kowtowing to pressure groups whose motivation and goals are completely unrelated to, and often contradictory to, the goals of programming, and open source.
If someone's suggestion doesn't improve the programming language, tell them to get the fuck out.
|
programming
|
Political correctness has its place, but this isn't one of them.
In something so trivial as long-established terminology, that's explicitly unrelated to the derogatory or racial connotations they're subjectively correlated with, it makes absolutely zero sense to sacrifice anything of significant value (including time or maintenance overhead) solely for the sake of idealism which _hinders_ the progress of these projects.
Some people I know feel strongly about racial or gender-oriented issues that are very controversial today. For the most part, I lean toward the side of equality in this regard and believe in treating people with respect regardless of orientation, gender identity, race, etc. and my own personal opinions toward them.
I genuinely question whether or not being transgender is really a solution to the identity issues that transgenders have, for example.
If they decide to discuss that with me and are OK with me explaining my views, I'll respectfully convey my views on that.
But I also understand that what I hold is taken to be among others as an opinion and nothing more, for better or worse.
What I don't believe in, though, is applying purely subjective meaning and idealism to things which aren't demonstrably perpetrating the ideas that the idealism claims to be against.
|
programming
|
both clang and gcc do this:

```
$ cat foo.c
typedef const char* cstring;
cstring x = 123;
$ gcc -c foo.c
foo.c:2:13: warning: initialization of ‘cstring’ {aka ‘const char *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
 cstring x = 123;
             ^~~
$ clang -c foo.c
foo.c:2:9: warning: incompatible integer to pointer conversion initializing 'cstring' (aka 'const char *') with an expression of type 'int' [-Wint-conversion]
cstring x = 123;
        ^ ~~~
1 warning generated.
```
|
programming
|
I understand, but one of the cornerstones of programming is DRY ("Don't Repeat Yourself"). I gave a particular example, but you can generalize it to other cases. For instance, the Windows API does lots of this and uses lots of typedef aliases to simplify the code and make it easier to read: `LPVOID` is `void*`, `LPCSTR` is `const char*` when Unicode is not defined, `LPCWSTR` is `const wchar_t*` (16-bit Unicode), and so on. The code should not be cryptic or hard to read, as human time is expensive and lots of studies have shown that developers spend a lot of their time reading code.
By using type aliases you can replace the type if it is no longer suitable. The OpenGL C API also does this: the type GLfloat, which is an alias for float, can be changed depending on the compiler settings, architecture, or future need.
|
programming
|
Ok. There are lots of cases where type aliases can make life better and code easier to read and change.
Another good case for a type alias is a function pointer.
For instance:
```c
// Cryptic
double findRoot(double (*mfun)(double), double a, double b) {
    ...
}

// Clearer with an alias:
typedef double (*MathFun)(double);    // C
using MathFun = double (*)(double);   // or, in C++

// More readable
double findRoot(MathFun fn, double a, double b) {
    ...
}
```
Another legitimate situation to use a type alias for a pointer is when passing opaque (void\*) pointers around to carry C++ objects or C structs across a DLL boundary, where you cannot mix them up.
Another example where you can find lots of typedefs is the Win32 API, which uses LPCSTR for const char\*, LPCWSTR for const wchar\_t\*, and LPCVOID for const void\*. LPCSTR communicates the intent to the reader that the function expects a NUL-terminated string.
&#x200B;
|
programming
|
>I understand, but one of the cornerstone of programming is "DRY" (Do Not Repeat Yourself).
Writing `const char *` instead of `LPCSTR` is not repeating yourself. If you want to avoid repeating yourself, you are looking for type inference, not type aliases.
>For instance, the Windows API does lots of this and uses lots of typedef aliases for simplifying and making easier to read the code. For instance, LPVOID is void*, LPCSTR is const char* when unicode is not defined, LPCWSTR is const wchar_t* (Unicode 16 bits) and so on.
These are **awful**. They really are. The Window API is notorious for being one of the most awful APIs to read code from, ever. That's not just the type aliases of course, there are lots of other issues like the many functions that have half a dozen mandatory NULL parameters, but it's part of it.
>The code should not be cryptic or hard to read as human time is expensive and lots of studies have shown that developers spend a lot of their time reading code.
`const char *` is not difficult to read. It simply isn't. It's extremely common in C code. If that's something you have difficulty reading, you are a new and very inexperienced C programmer.
>By using type aliases you can replace the type if it is no longer suitable, the OpenGL C-API also does it as the type GLFloat which is an alias to float can be changed depending on the compiler settings, architecture or future need.
No you can't. Doing so would break ABI compatibility, which is unacceptable. The OpenGL `GLfloat` type is not going to change. It will always be a 32-bit single-precision floating-point number. Changing it would break *hundreds of millions of lines of code*.
The Windows API has a `WORD` typedef. The idea was that it would change to whatever the natural word size of the processor is. So on 16-bit platforms it was a 16-bit integer, but would be a 32-bit integer on a 32-bit version of the API and a 64-bit integer on the 64-bit API. Needless to say, it's still 16-bit and always will be.
|
programming
|
Well it's simpler relative to SGML, of which XML is the subset disallowing tag inference (tag omission, such as in HTML) and other short forms, as well as custom Wiki syntaxes (short references), stylesheets (SGML LINK), and a whole bunch of other things.
Though arguably, by not supporting these authoring-oriented features of SGML, XML makes an ok delivery format, but certainly not a great authoring format. Which is why we're using Markdown and other Wiki syntaxes for actually writing text, when SGML had integrated Wiki-syntax parsing and conversion to angle-bracket markup over 30 years ago. But since XML (XHTML) also hasn't replaced HTML as the delivery format on the Web (its original stated goal), it shouldn't be considered a "Web" standard anymore (only full SGML can parse HTML based on a formal standard). Consequently, W3C has [wound down its XML activity recently](http://lists.xml.org/archives/xml-dev/201807/msg00025.html). The remaining bits of XML on the Web are for SVG and MathML, and W3C's SVG WG has recently published an [SVG 2 candidate recommendation](https://www.w3.org/TR/2018/CR-SVG2-20180807/) that essentially *removes* features from SVG 1.1 and 1.2 that weren't supported by browsers anyway (which, while disappointing and frustrating to those who worked on it, is still an ok result for the SVG WG, given that browser vendors indicated they don't want to work on more SVG features).
|
programming
|
It mandates *commas*, not colons. But it says nothing about number format (e.g. the decimal separator). CSV files written by Excel in one country can be read incorrectly by Excel in another country when one uses a comma as the field separator and a dot as the decimal separator, while the other uses semicolons and commas, respectively.
Interesting how they intend the presence of a *header* and the *encoding* to be specified in a MIME type parameter instead of in the file itself.
|
programming
|
No it isn't sane in and by itself. It's an artifact of XML being specified as a strict SGML subset, and the deliberate choice of the XML designers to not support SGML end-tag minimization. In SGML, you can just use
<numberOfAxes>6</>
or even just
<numberOfAxes>6
(with the proper declaration allowing end-tag omission for `numberOfAxes` in place) to express the same logical document. Or you could use SGML short references and other more exotic minimization features to change the appearance of the `numberOf` element more drastically.
Also, SGML has *concurrent markup* (not widely used, though). Concurrent markup allows things such as
<(DTD1)p>And the Lord said,
<(DTD2)q>Read my lips: Do not murder.</(DTD1)p>
<(DTD1)p>Be nice to each other instead.</(DTD2)q>
And the people said "Amen."</(DTD1)p
(*taken from <http://xml.coverpages.org/DeRoseEML2004.pdf>*) where text can be tagged in an overlapping fashion, such as would be useful for marking up poetry, speech, and e.g. postal addressing data.
|
programming
|
> Guess what - all the comments ever made by the self confessed retard are offtopic, everywhere.
All the comments of you being derogatory and insulting users are off-topic as well.
You make zero sense and have no argument or leg to stand on. Try again.
The thing that differentiates between you and me is: You get pleasure of being rude and insulting other users 24/7. I get pleasure by being retarded. Big difference.
Now go repent and speak to a counselor.
|
programming
|
Well, the CoC itself is fairly harmless. It's a bit too specific for my taste, since that's going to spawn issues a la "please also include my specific characteristic", at which point it becomes a maintenance load.
The single biggest problem with it is the CoC maintainers' behavior, wherein they infer, from maintainers including their CoC, a grand sign-off on all of their political views.
The contents of it are cool, I just wish the maintainers would stop pretending to be lead figures in a revolution backed by everyone using their CoC template.
|
programming
|
Is that true? Let's find out! Comments from the linked thread:
> * Linus has gone full tumblr reddit SJW basedboy libtard bugman cuck
> * he was always a SJW, except instead of getting triggered by the patriarchy he was getting triggered by some freetard shit
> * Fuck! Linux is now cucked and castrated. How will we ever recover?
> * He apologizes for his autism, despite the fact he'll always be as autistic as ever. What a pointless expression of nothing.
> * Next up: Linus suggests to the Linux Foundation to route money to outreach programs for n\*\*\*\*rs and tra\*\*ies
> * Everything in tech is so cucked and SJW nowadays. What do we do now?
> * Yep, we're fucked. It's done. Over. Literal blue hair tra\*\*y shit has made its way into Linux.
Made it through about a quarter of the thread, but it's more or less the same three or four sentiments repeated through to the end.
|
programming
|
Fairly harmless? There are fairly harmless CoCs like the one used by the Ruby community, or even Django's, but the "Contributor Covenant" is not.
It focuses heavily on the appearance of collaboration, and bans behavior based on what some third party may find offensive, rather than focusing on the intent of the speaker and the context in which the interaction happened. The list of "unacceptable" behavior is open to interpretation too, despite the fact that one of the main pro-CoC arguments is that adopting one would reduce friction when people from different cultures interact.
That wouldn't be that bad, boring, maybe. Given that what to some people is friendly banter, to other people watching may be insulting. Or talking about diets, which may be offensive to people who are fat. I mean, just avoid talking about anything other than development in the context of the project, or in project related channels (like an IRC off-topic channel)... but that's not possible as the scope also includes public spaces, and is also open to interpretation. So if you say something unpopular on Twitter, or in a public forum, then you could be infringing in the CoC. It has happened a few times already.
Finally, why all that confidentiality when someone reports something? In all countries with rule of law that I know of, when there's some problem and someone takes an issue to court (to be judged by a third party, which would be the TAB in Linux's CoC), the litigants are public, the hearings are public, and the results are public too. But here it's not: you expose anyone to some vaguely defined anonymous judgment, and expect me to believe that its contents "are cool"? They aren't.
|
programming
|
I think it's a mistake to take 4chan seriously. With that site being as anonymous as it is, you don't have fixed identities on the posters, and you don't know who is saying what, or how seriously. These posts are like the chaos of elementary particles coming out of nowhere and disappearing back into nothingness: at best surface-level reactions to whatever is happening elsewhere, and a specific kind of surface-level reaction at that, one only permissible on a site like 4chan to begin with.
Consider them the fools to kings; just idiots babbling with equal measures of truth and madness.
|
programming
|
I agree, but what amazes me is the number of these people around. I could understand a small closed chamber of people sharing the mindset, but not thousands and thousands of people rallying to the point that they are nearly as active as here. This elitist "don't show emotions, don't show weakness, harass anyone that does, don't help others" mindset is clearly disruptive in our society, and their numbers only seem to keep growing.
|
programming
|
> Given that what to some people is friendly banter, to other people watching may be insulting.
While there are limits to what is reasonable, being respectful to people who are offended by things that don't offend you is _literally the whole point_ of community codes of conduct.
If you think there are specific places where it has been mis/over-applied, argue those specific cases. Banning trolling or using "be excellent to each other" as a motto should never be controversial steps for open source communities.
|
programming
|
> the commenter [taking pleasure] in others suffering … is [not] a sign of good community
While I agree on the principle, is this really the guy you wanna single out for that? Schadenfreude is a significant part of reddit and a part of life, and people enjoying seeing bigots react to not being allowed to openly engage in harassment is probably the least important societal problem I've heard of. What does it say about this community, if the one bit of schadenfreude we target as "not okay" is some guy pointing out bigoted reactions to a relevant discussion topic and taking pleasure in doing so?
As for the linking to 4chan: it's a reasonable concern, but it's relevant in the sense of illustrating how (badly) the greater community is reacting to the new CoC. So on the grounds of relevance, there's good reason to be linking to it. And disallowing inter-community links on principle sounds like a bad policy liable to segregate the internet even more than it already is.
|
programming
|
That the intolerance paradox is in itself a paradox is an admission that the axiomatic definition is incoherent.
Do mind though that Popper's paradox cannot be used in the realm of speech. The intolerance he's talking about in that passage of the book is physical in nature, it's the silencing in itself that is intolerant, to reverse the roles is a misconstruction of the point of the whole thing. If people are illiberal and use violence then we must use violence to defend liberalism is his point. But absolute freedom of expression is part of liberalism in this context.
Mill's more coherent anyway.
|
programming
|
Yep, he said that. And whether he's 100% engaged or 80% engaged or worse, you won't know until decades later when Linus talks about it in some interview. "Yeah after the entire leadership turned on the emotional blackmail, I just stopped caring. My first act was to take a break. The world didn't end. I put more work onto other people and explained that I was raising leaders, or sometimes that I couldn't respond to the work in a CoC-compliant manner. Gradually I made a game of narrowing down what work people wouldn't take from me, and seeing how long it could go undone. People would ask for direct feedback and I'd just not give it, and when challenged--well, you know, I don't want to be *toxic*. After all the abuse I got, it was fun to just sort of ride the project, to be the only one that knew I wasn't completely in it. And then when [name redacted due to laws of time travel] committed the patch would go on to kill the project dead, I saw the problem right away--I can say that now, I checked with my lawyers very carefully--but I left it alone. Nobody else saw it. Then [date redacted] happened."
|
programming
|
I believe that only Linus himself can understand the pressure of being who he is and being responsible for the Linux Kernel. I don't live too far from where Linus lives here in Oregon and I can say that at least he gets paid well to do what he does, and he lives in a VERY nice place in this world. He is truly one of the most important people in the world, and that's a burden he must carry. In my own position I probably have something like .001 percent of the same burden and it's all I can handle.
|
programming
|
Two can play at this game.
"Honestly, I wish people close to me had been more straightforward with confronting my leadership style earlier on. A lot of the problems that I blew up at were relatively minor, and they drove away large numbers of talented developers who ended up performing really well in other projects. This worked in two stages - people whom I personally offended, and people who heard about my reputation and chose to stay away. If I'd been able to lead better, those people would have provided valuable work for Linux instead of working elsewhere. I could have provided the exact same feedback and kept my high standard for code quality, but it was gratifying to be an asshole to people regardless of the consequences."
|
programming
|
Indeed. And you could look for evidence for your obnoxiously silly fantasy by looking for talented developers who were driven off. The best source of that might be successful people reminiscing about having decided to not go into Linux development. But you have to question how sincere they are about pinning that on Linus -- it's an easy thing to *say*. You can also look for remarks from new Linux developers. "The Bill-and-Ted CoC that Linux had terrified me, but when I heard that they swapped it out for the one by the SJW that got fired for making people uncomfortable at Github, I knew that Linux would finally be a safe space for people like me."
My prediction could probably be falsified with just a chart of posts by Linus over time to the mailing list, before and after this announcement. If he's about as active, then it's hard to argue that he's significantly less engaged. If he's significantly less active, you could still suggest that he is just as engaged and just as *effective*, but is only saying less to avoid "being toxic".
|
programming
|
We'll have to see, but I see this as him committing to refrain from "Mauro, shut the fuck up" rather than "I'm too scared of being perceived as abusive to hold a high standard of code quality and critique Mauro's shitty code."
---
My fantasy is silly and obnoxious, but so is yours.
Anecdotally, I've never been in an environment where someone goes "lol fuck it, I'm not allowed to be a jerk anymore, so I'm just going to let things go to hell." I *have* been in multiple environments where complacent management allowed a project lead to be a thundering asshole, and the project failed because all of the decent people promptly found new jobs.
|
programming
|
Honestly in the long run this would be a good thing, for him to hand stuff off. The man isn't going to live forever, and we are going to need a Linux to exist for a good long time. It's best that there are 100 people that, together, can do everything Linus does and can hand off their knowledge to others so it doesn't have to die. It's fucking dangerous to put all your eggs in one basket with one developer.
|
programming
|
Readability is totally dependent on length. If one concept is expressed in more than what you can read in a single glance, or even worse, in more than one page, you'll spend more time reading and understanding it than if it is short and simple.
If something needs clarification, if there are important details - split. Tell the long story short first, then elaborate bit by bit. That's another thing where Literate Programming shines, and the "clean code" approach fails.
|
programming
|
Ah, it's simple... once you understand what is going on :-)
"Simplicity" is always a judgment relative to what you know already.
And no, I don't enjoy APL code.
Anyway, the point was: Too verbose and it's hard to read. Not verbose enough, by using too many symbols, and it's hard to read. Code is different from what you are used to, and it's hard to read. Even if the code is well written, if you see it the first time, and there's no view-from-above introduction in the comments, it's hard to read until you've read enough of it and know your way around it.
|
programming
|
> "Simplicity" is always a judgment relative to what you know already.
Simplicity within a huge context is not simplicity. That's why the simplest possible language is a language that speaks in terms of your problem domain - this way you stay in a single, minimal context. A complex language with complex rules, which you must keep in mind *in addition* to the complexity of the problem your code is solving, will never be "simple".
|
programming
|
Well, yes and no. I'm just talking about what I thought people were talking about. Yes in that I agree with you completely. No in that there is a pretty common opinion that you don't need to specify the type of something because the language can do it for you in one way or another. That's what I thought this was about.
So, take JavaScript, for example, they have `var` with no type specified and *get along just fine* (right?).
In C#, fairly recently (at least in that it wasn't in the initial few versions of the language), a `var` keyword was introduced to let programmers have the compiler infer the type. Its main use is for anonymous types, but people use it for everything. Why write `int` or `string` or `SomeClass` when you can write `var`? And it gets used pretty heavily for that.
I didn't read the article that closely and thought the argument was about *that* not the action of typing out code. Although, they are connected in that one of the main arguments for `var` is that they don't have to type out types the compiler can infer.
Anyways, yes, I agree with you completely on:
> Those bear semantics - and therefore do not contribute to verbosity. If you remove the types, you'll have to convey the same information otherwise.
And that was my point about types. I get tired of people saying types don't really matter.
|
programming
|
Yes, they use a signed integer in Unix. As well as in NTP, and in fact most standards of representing time, as shown in the [Wikipedia article I have already posted](https://en.m.wikipedia.org/wiki/Year_2038_problem). It is a signed integer because [early C did not have unsigned integers](https://unix.stackexchange.com/questions/25361/why-does-unix-store-timestamps-in-a-signed-integer), and also allows representing time before 1970, which might have been an important consideration since [computers were in use before Unix](https://en.m.wikipedia.org/wiki/History_of_operating_systems).
Please, before you post comments or criticisms like these, educate yourself on the subject of discussion. It shouldn't be a shocking revelation that the affected standards use signed 32-bit integers. It has been stated multiple times at this point, including the aforementioned Wikipedia article.
|
programming
|
I really agree with this. As a relatively new programmer, I don't really get why everything is so slow. A Pentium G4560 can run modern AAA games no sweat, but it struggles when trying to render a web page with a few too many checkboxes? Really?
When I make something, I want it to run well on the "crappy low-end" laptops sporting A6 processors. Whenever I see the slightest stutter, I try to fix it. But unfortunately sometimes I simply can't get around the limitations of whatever framework I'm using.
I think back to when I used Windows XP and browsed the web with ease. These days, websites are functionally and even sometimes visually the same, yet it stutters like mad. Why? I can't even run the Reddit redesign or Discord on a $300 laptop without wanting to shoot myself.
The waste of resources is completely ridiculous and unnecessary. Hopefully as we continue to come towards limitations in hardware improvement, we'll be forced to make improvements in the software.
|
programming
|
1) Game engines are a colossal task in and of themselves. Where one person can create a webpage in minutes, a game engine is built by dozens of employees over years.
2) Hardware is specifically built to make games faster since it's the driving force of hardware improvements.
3) A webpage with too many checkboxes? It's not the checkboxes, it's the ads running in the background, bad decisions by devs to make ajax requests every time the mouse moves, and other such nonsense. Most reasonable websites run perfectly fine on my computers.
|
programming
|
The trick is that games can sacrifice whatever else needs to be sacrificed to go fast.
If they need to render that checkbox as one static image on top of another static image... ship it. Meanwhile the browser has to follow a spec. If the spec says so, then they have to dutifully generate just the right soft drop shadow and pixel-exact 1px edges on the checkbox, with exactly the right CSS transform tween when you click it, whether it performs well or not. And the specs multiply; now there are decades' worth of specs layered on top of one another, and they all have to be followed as exactly as possible.
That's for browsers of course. For desktop applications... they don't have nearly as many excuses.
|
programming
|
> As a relatively new programmer, I don't really get why everything is so slow.
It's very simple: programmers get paid to deliver a piece of software/functionality, and stop once it works on the target machine. A $300 A6 laptop is not the target machine.
That's also what business expects. If you are assigned a task and will take 2-3 times as much time as others because you are optimizing everything, it will reflect badly on you.
Or think about it this way. You and your competitor are both building an app that will slice your bread. After 1 year, your competitor has a slow 1.5GB app running in Electron debug mode. Millions of people buy it since it's the best thing since sliced bread eh.
Meanwhile, after 2 years your 1.2MB app of handcrafted assembly does the same thing. Just like 101 other knockoffs that were slapped together in the mean time. A few people find your app and are amazed, but you have nowhere near the market share as that "unoptimized piece of crap" #1 competitor.
|
programming
|
> If you are assigned a task and will take 2-3 times as much time as others because you are optimizing everything, it will reflect badly on you.
This just means the costs are not assigned properly. Right now, it's the end-user who pays the cost, in frustration, delays, crashes, and other effects of bloat. The ads and privacy mining still function and pay the bills, all the competitors are slow for the same reason, and people have been frog-boiled into thinking this is just the way computers work.
It's the tiny, tiny minority of us who remember the world where optimization was necessary simply because of resource scarcity who understand, and we're even a small minority in tech circles, unfortunately. We have yet to reach the right watershed moment that forces optimization for other reasons.
|
programming
|
I disagree with some points. Sometimes there are added features that are not visible in the apps. Security patches increasing computational and memory costs is the best example here.
If you compare websites today with the Win95 era, it's vastly different. Responsive layout makes everything easier. Remember how many CSS hacks were needed before CSS3? Now we can use the CSS3 \`calc\` feature to mitigate some of them. WebSocket and localStorage are features that are hidden, but neither useless nor free.
Media are getting better, such as higher-resolution images on average. 3D models get more polygons.
Though I agree with the text editor one; for developers there has been some improvement in past years with VSCode (or the more native Sublime Text), and even MS Visual Studio is improving in performance.
And in the case of pushing the limits of optimization, I think Factorio is achieving it, given how big in scale a single game can get.
|
programming
|
The Factorio developers have done all sorts of optimization work. I estimate the maximum usable factory size now is about 100-500 times what it once was.
For example, conveyor belts are now timer lists. They wrote a blog post about this. Originally, conveyor belts would scan for items sitting on them and update their positions every game tick. Now, placing an item on a conveyor belt adds it to a priority queue: the game calculates at which tick number the item will reach the end (or the next checkpoint), and doesn't touch the item until that tick number arrives - unless it's currently on screen or being affected by something other than a conveyor belt.
You can make huge train networks and the game internally constructs multiple layers of graph structures, each one having less detail than the last. Then it computes a path on the least detailed layer and uses the more detailed layers to refine it, instead of computing the path on the most detailed layer.
One alien will roughly follow the path of another nearby alien going to the same target. This saves on pathfinding computation because the following alien doesn't need to run the pathfinder at all. That's why aliens travel in groups (that and the obvious reason of having more firepower).
It makes use of the Data-Driven Design and Structure-of-Arrays patterns. Each electrically powered object has an ElectricEnergyAcceptor (not actual name) object associated with it. Except all of these are actually stored in a vector in the ElectricityNetwork object. Every tick the electricity network runs through all the energy acceptors on that network, utilizing space locality. There's a *whole lot* (or maybe just a moderate amount) of special case code for when you plug an object into two networks, which is possible to do and works seamlessly, in which case one network has to update an acceptor owned by a different network.
|
programming
|
> Security patches are increasing computational and memory costs, which is the best example here.
No, they're the *worst* example. Paraphrasing [DJB](https://en.wikipedia.org/wiki/Daniel_J._Bernstein): correct software is software that satisfies the requirements. Secure software is software that satisfies the security requirement. Security requirements are a subset of all requirements. Therefore, correct software is secure.
This is not just trivially true. There are practical implications as well: security breaches always trace back to some *error* in the program or its dependencies somewhere. Even Spectre and Meltdown trace back to a CPU design error, or at the very least a mismatch between the assumptions programs make and how the CPU actually works.
Now, the easiest way to make sure your software is correct is to make it _small_. Little source code, few dependencies. Of course, one gotta have features, but security isn't one that generates bloat. (Except when you use encryption, but even that takes very little code.)
|
programming
|
I'm not saying that security patches bloat the software; I said they increase processing cost.
I don't really know the inner workings of CPUs or the details of Meltdown/Spectre. AFAIK Meltdown exploits an optimization hack in Intel CPUs. CMIIW, but that optimization bypasses a security check that is costly; I think the patch turns that optimization off, so performance drops by up to 30%, which does not happen on ARM CPUs. I haven't examined Spectre too deeply; I just learned that Spectre affects both Intel and ARM, and there is no solution for it yet. I think neither is a good example, since Meltdown just affects Intel and not ARM, so it's simply a design mistake by Intel.
However, for websites it is more or less true. From user authentication alone, there is password hashing with bcrypt or argon2, which is costly (in performance), plus at minimum session/cookie authentication. There are more forms of authentication: JWT, OAuth, public/private SSH keys, and two-factor auth. And we still need to protect against XSS, CSRF, input tampering, and much more.
In my experience, developing a framework/package that handles all (or most) of those security vulnerabilities, and can usually be configured, is not easy, and is usually bloated. But if every developer writes their own security implementation to avoid bloated code, who knows how many vulnerabilities their untested implementations will have. Not to mention how many developer hours will be poured into that.
So yes, in some cases it isn't, but in many cases security patches bring size and cost up. Not by much, but it adds up.
And no, to cut the time needed and to apply "don't reinvent the wheel", avoiding dependencies isn't the answer either.
|
programming
|
> I'm not saying that security patch bloats the software, I said it is increasing processing cost.
Spectre/Meltdown issues are the only kind that make stuff slower. On the software side (and 99.9% of vulns are about software screwups), there is no need for such a penalty (though checking your bounds systematically does help prevent some mistakes).
For web sites, yes, password key derivation is expensive. That's about the only expensive crypto operation ever (there are cryptocurrencies, but they're just wasteful madness). Session cookies, however, are not passwords, so you can just hash them.
|
programming
|
This is so painfully stupid. Argh. And the fact that a bunch of people here support this drivel is a great example of why most software developers are code monkeys and should never have a place in management or leadership positions.
Planes, cars, and what have you are designed that way because it makes sense economically. Their quality is measured by getting the most output with the least input, i.e. ROI. If one actually measured software projects in the same manner, then one could argue that many of those bloated popular projects are extremely well-designed.
The author and their supporters are free to spend 10 times more dev time to speed up their programs by 30% and make them 40% lighter to do what? To lose to a competitor's product because they're over 10 times cheaper and have been on the market way longer but take 2 seconds longer to boot up?
Get your product to the market with as little work as possible, then start improving and optimizing as needed.
&#x200B;
>The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; **premature optimization is the root of all evil (or at least most of it) in programming**.
\-- Donald Knuth
EDIT: To the downvoters, [here's](https://www.reddit.com/r/programming/comments/9go8ul/software_disenchantment/e66oluw) a good explanation. Much better than I can be arsed to write.
|
programming
|
Well, you seem to only have read the comments, or... maybe like half of the first paragraph of the article. It's not about comparing programs to cars.
Anyways, you are not less confused than the people you claim to be confused. You don't understand what value is, and, subsequently, count it wrong. You think, that the value of a program is entirely defined by market in an unsophisticated "supply and demand" kind of way. Here's the first obvious problem with it:
by what time do you stop counting the income produced by the program? Most profitable software today is sold as subscription, not an OTC package that is sold in a single transaction and then forgotten entirely. Corporations love subscriptions! But... this creates a problem. An OTC purchase may pretend to put a very exact price on a product (although, even before subscription became so popular, good sales / marketing people understood that there are also things like "customer loyalty", "perception of quality" etc. which weren't easy to put a price on); with service-based economy of software, how are you going to measure it? Ever heard about https://en.wikipedia.org/wiki/St._Petersburg_paradox ? It is applicable to our situation in the following way:
The more you improve the quality of your product, the greater the reward will be, but, improving quality infinitely will most certainly prevent you from being rewarded for it. So, you need to find the right time to stop.
Which brings us to the second problem with your argument: very short intervals used to develop software. This is just a historical artifact of how taxes are levied, how our education system is designed and many other things. For agricultural economy it makes perfect sense to tax everyone once a year, because, it, basically, repeats on a yearly cycle. Many other businesses have even shorter cycles, so quarterly planning makes perfect sense. But R&D doesn't really work that way. It may take decades... or days. Unfortunately, it's mostly decades, and not days. Similarly, education system, because of its own similar incentives, uses very short cycles to produce anything worth of grading. Even though learning times today are at their historical maximum, the cycle time is, perhaps, at its minimum. In other words, students are tasked with exercises which they have to complete in 15 minutes, or an hour, at most a day. A week-long project is a huge event! The timing aspect is never made explicit in education system, and so a lot of people fall into this trap of thinking that what they studied is, in principle, always done in short intervals, that there's something to show after each such interval, and that the individual value of these time slices can be integrated to obtain the tally.
Long story short: if you base your value judgement on short-term, close-term estimates, you will most likely miss the bigger picture. You will be like the million of others working on a transient and an uninteresting problem of designing an e-commerce web-site, a problem that didn't require even a 0.01% of attention it is given, while, for example, tremendous gains to public health can be had, if anyone was willing to invest into automating hospitals, or governments, or designing more efficient agricultural systems etc.
|
programming
|
Not writing slow code in the first place doesn't require any additional effort and the decisions that have the largest impact won't be found in a CPU profile for someone to optimize later.
> "The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering." - Donald Knuth
|
programming
|
Nobody writes slow code intentionally. If writing slow code and writing fast code took the same effort, as you claim, people would always write fast code. Obviously, it does take effort to write fast code and even more effort to write fast code that's maintainable (see the quote you posted).
What's key is whether that additional effort is worth it. It is, if there's demand for it. If there's no demand because users don't particularly care for that optimization, then it's a waste of resources.
As a programmer I share the desire to write beautiful optimized code. As a businessman I think that it is a reckless waste of resources.
|
programming
|
> discovering
So at the time they thought it's a good idea, they didn't intentionally go "Let's use Python, because screw fast programs".
They later realized Python was simply not enough. To come to that conclusion before writing any code, they would have to expend a ton of effort. Arguably, the entire project is them finding out that Python isn't enough for their case.
It's literally a counterpoint to your "writing fast code doesn't take any additional effort".
|
programming
|
I know it isn't intentional. I just don't see any indication that the tradeoffs are even considered. It would be hard to justify [25,000 allocations per keystroke](https://groups.google.com/a/chromium.org/d/msg/chromium-dev/EUqoIz2iFU4/kPZ5ZK0K3gEJ) as helping the code base be more maintainable.
The problem is worse for dynamic languages. Without changing any of the logic but rewriting a few lines to not be actively hostile towards the JIT/GC the difference can be closer to 50,000%. That is a *qualitative* improvement in what your application is capable of, what hardware it can run on, and how the rest of the code can be structured (goodbye cache invalidation). An extreme example but not a hypothetical one.
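A hypothetical TypeScript sketch (not Chromium's actual code) of the kind of rewrite described: both functions implement the same case-insensitive substring check, but the first creates several throwaway arrays and strings per call, which churns the GC when it runs on every keystroke.

```typescript
// Allocation-hostile: each call builds a char array, a second mapped
// array, and a rejoined string before doing the actual check.
function matchesNaive(haystack: string, needle: string): boolean {
  return haystack
    .split("")                       // new array per call
    .map((c) => c.toLowerCase())     // another array
    .join("")                        // another string
    .includes(needle.toLowerCase());
}

// Allocation-light: same logic, one lowered string, no intermediate
// arrays. On a hot path (per keystroke) the difference adds up.
function matchesLean(haystack: string, needle: string): boolean {
  return haystack.toLowerCase().includes(needle.toLowerCase());
}
```

No change to observable behavior, which is the point: this kind of cleanup is not "premature optimization," it's just not being actively hostile to the runtime.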
Optimizations are a different issue. Go ahead and use bubble sort if it makes sense for whatever reason.
|
programming
|
Ah, a classic rehash: guy from the "software is art" camp upset with the reality that "software is business." There are so many economic reasons why things are the way they are. Better than fast code is maintainable code with test coverage. Get it out the door, optimize if there's a need with the peace of mind that you aren't breaking business requirements. Premature optimization is a waste of time and resources. I'm all for eliminating "dirty hacks" culture and addressing technical debt regularly, but there's a middle ground that this author, who wants everyone to more or less altruistically optimize their code for the unnecessary god of speed and tiny size, fails to acknowledge.
|
programming
|
Which part of my response seems like I didn't read the article? I simply disagreed. The guy wants me to optimize all my apps. My boss wants me to get it out the door. I want to do other things in my spare time than optimizing apps that my boss owns. So during work hours, I do what the boss man says. I make the code as good as possible, I push back and demand when things need to change. But at the end of the day, I do what my boss dictates, which is what the business dictates, because I have neither the control nor the power to do otherwise without working overtime. And I already work overtime on what the boss man says.
|
programming
|
More important than discussing the hierarchies of business structures and the risk analysis of consistently fighting with the guy who pays me, I'd prefer to address the fact that I disagree with the guy in the article. Viewing speed and memory consumption as the main metrics for software quality is far from holistic. There's never been a free lunch, so let's look at tradeoffs. And when we look at tradeoffs, let's consider them from the perspective of the business, which is guided by opportunity cost, aka "the benefits an individual, investor or business misses out on when choosing one alternative over another."
1). Optimization vs developer time spent doing literally anything else aka new features. While the author draws the comparison to cars, where speed and fuel consumption are the selling points, I find this to not really work in software. Have you ever sat in a sales meeting for a software product? I've sat in as a technical aid on sales presentations for a Cash Management platform for business banking. Trust me, no one gave a shit about the time to First Meaningful Paint. All they wanted to know was whether our dashboard widgets could be dragged around b/c the competitor's can. Features sell.
2). Optimization vs maintainability. In Donald Knuth's paper "Structured Programming with go to Statements", he wrote in 1974: "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." Take using an ORM for example. In a new system that is iterating and frequently changing, handwritten SQL is not insulated at all from these changes and requires much more time to maintain and change than using an ORM. You can always drop into raw SQL when needed, but that case generally won't present itself until there's load, at which point it's not premature.
3). Optimization vs time not in the market. Software has a lot of competition and achieving market share is probably priority number 1 in the beginning of a startup. Look at the history of Windows. That's why facebook champions the phrase "Move Fast and Break Things." Both of these companies succeeded because they achieved market share, even if the product had bugs along the way. What do I need to optimize when there's 100 users on my system? Let's assume for a second that I'm releasing an app that no one is using. Let's say I could write a REST endpoint that achieves a business need in 1 minute that suffers from an N + 1 problem using my ORM, or in 8 hours with raw SQL since the query is highly dynamic and requires a lot of edge case handling around optional parameters when building the SQL String. I'd argue pragmatism says slap a TODO: optimize this n + 1 query and get your product out to market. You'll only have trouble under load. And needing to optimize due to load is almost always a good sign because you're probably making money at that point.
4). App payload size vs maintainability, correctness, and time to market. jQuery increased payload size but made cross-browser compatibility so much easier to achieve. Nowadays, people are migrating from backend-generated HTML with doc.ready scripts and small individual JS files to tools like React, Ember, Angular, etc. These tools majorly increase payload size. But state management in a highly dynamic, feature-rich web application is so terrible when there are two sets of truth, the DOM and your JS. Again, if we consider point 1 that features sell, having a tool that allows us to write any feature with relative ease as quickly as possible helps maximize this.
So while the author is all upset that things aren't optimal, it's not really hard to understand why things are why they are. I think choosing optimization for the sake of being perfect is an incorrect choice and makes no sense in the business world, the world in which most software developers work. So, I'm going to continue doing what my boss says because these choices are in fact the right choices for the business.
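The N+1 pattern from point 3 can be sketched with a toy in-memory "database" (all names and data here are hypothetical, purely for illustration): the naive version issues one query for the parent rows plus one per row, while the batched version issues two queries no matter how many rows there are.

```typescript
// Toy stand-in for a database; queryCount tracks round trips.
let queryCount = 0;
const db = {
  orders: [{ id: 1, userId: 10 }, { id: 2, userId: 11 }, { id: 3, userId: 10 }],
  users: new Map<number, string>([[10, "alice"], [11, "bob"]]),
};

function findOrders() { queryCount++; return db.orders; }
function findUser(id: number) { queryCount++; return db.users.get(id); }
function findUsers(ids: number[]) { queryCount++; return ids.map((id) => db.users.get(id)); }

// N+1: 1 query for the orders + N queries for the users.
function naive() {
  return findOrders().map((o) => ({ ...o, user: findUser(o.userId) }));
}

// Batched: 2 queries total, regardless of N.
function batched() {
  const orders = findOrders();
  const users = findUsers(orders.map((o) => o.userId));
  return orders.map((o, i) => ({ ...o, user: users[i] }));
}
```

Which is exactly why the `TODO: fix the N+1` approach is defensible — the fix is mechanical and local, so deferring it until load actually appears costs little.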
|
programming
|
>Souq is an Arabic shopping website... it's entire fucking market is the middle east.
That is why I said middle eastern countries... that is where this censorship is coming from.
>I've come to the conclusion that you have have no idea what you're talking about. Amazon apparently has more power than the Western Powers to change opinions in the middle east.
They should. They have the ability to act unilaterally. If they actually exercised their power instead of letting countries hold customers hostage from the internet they could hold the internet hostage from those countries. A handful of tech companies could cooperate and effectively force a country off the modern internet if they wanted to.
"End censorship worldwide or else"
But they don't have the guts to do it apparently "because quarterly profits could go down a few percent" when they should be worried about growing exponentially over the next decade. Quarterly profits be damned.
You don't grow exponentially when you are afraid of governments. You grow exponentially when they are afraid of you.
But according to you ruthlessness is not good for business. Caving in to everyone else's demands is.
|
programming
|
Where are you getting $100 on the dollar in a week from this?
Risk/reward analysis:
Action: Amazon allows Signal on their platform.
Risk: Blocks of datacenter IPs get firewalled by nations. Huge loss of revenue to Amazon's customers, leading to loss of trust, and leading to loss of revenue and political clout to avoid such situations as customers move to competitors.
Probability: Low.
Reward: $1,000/mo., gross income, not pure profit. (I'm being generous.)
Action: Amazon kicks Signal off of their platform.
Risk: Loss of Signal's sales, about $1,000/mo.
Probability: Certain.
Reward: They get to continue providing services to companies that have customers in the interested nations.
Whether I agree with state-level censorship on a personal basis or not (I don't), it's not a personal decision. It's a legal and financial decision.
|
programming
|
You are missing that it is a game of chicken. No government wants their whole populace angry with them over breaking the internet to stop a chat app. Especially when blocking Signal just gives Signal the impetus to improve their tech to be more resistant to this kind of blocking. It is a temporary fix for them at best.
Not caving means these governments (and other governments) are less inclined to try to stop them in the future for fear of punishment.
|
programming
|
These governments are already risking upsetting their populations by censoring domains in the first place.
These governments are the ones less likely to flinch, and already have state sponsored news to spread propaganda... "On News at 11, the popular online store [Souq.com](https://Souq.com) was caught spreading pro-terrorist propaganda. Technicians for the store claim that American hackers were responsible, and are working to clean it up. To protect our people from American hackers, the store and several other similarly hacked sites that use the same hacked datacenter have been temporarily added to the national firewall until it can be thoroughly cleaned up."
And, since Amazon would be blocked, they won't be able to say "Nu-uh, they're lying!"
|
programming
|
Which is why INFORMATION technology companies need to work together in solidarity, even with competitors, to be rabidly anti-censorship. Censorship is bad for all of them, both financially and philosophically. If they shut down Amazon and lie about it Google's front page needs to tell people their government is lying to them and censoring Amazon.
You also are underestimating the population placating effect of entertainment media. Shut off all video-streaming, porn, and video-games in the United States and there would be mass violence in the streets within the week. I am sure the middle-east would be even worse, even faster.
|
programming
|
Amazon's AWS only serves about a third of _cloud_ computing.
If a region's version of Netflix gets shut down from this, I'm sure they'll switch to a different cloud service quickly, and start building their own dedicated data centers to keep it from happening again.
China is fine with cutting off Google... They've done it before, even after it got big. Why wouldn't the UAE?
By taking a stand against censorship, all that these companies would be doing is driving that country's population to doing business with companies that are fine being censored.
And, it's not like sending messages through chat applications is the only way to spread subversive messages. Steganography can never be blocked. Just make an account on a photo sharing site, send a "normal" image to the people you want to talk to, and when you want to send something secret, post a version of that image that has a hidden message encoded in it. It's very easy to do, extremely difficult to detect, and impossible to prevent.
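As a toy illustration of the LSB technique described (real tools operate on decoded image channels and spread bits around; this just treats a byte array as "pixels" — the function names are mine):

```typescript
// Hide a message in the least-significant bit of each byte of pixel
// data. Flipping only the LSB changes each channel value by at most 1,
// which is visually imperceptible.
function embed(pixels: Uint8Array, message: Uint8Array): Uint8Array {
  const out = Uint8Array.from(pixels);
  for (let i = 0; i < message.length * 8; i++) {
    const bit = (message[i >> 3] >> (7 - (i & 7))) & 1;
    out[i] = (out[i] & 0xfe) | bit; // clear LSB, then set it to our bit
  }
  return out;
}

function extract(pixels: Uint8Array, byteLength: number): Uint8Array {
  const msg = new Uint8Array(byteLength);
  for (let i = 0; i < byteLength * 8; i++) {
    msg[i >> 3] |= (pixels[i] & 1) << (7 - (i & 7));
  }
  return msg;
}
```

Having the original "normal" image for comparison is what makes this hard to detect: without it, the recipient's copy is statistically almost indistinguishable from any other photo.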
You don't need to risk blocking the mom-and-pop webstores that are renting $20/mo AWS servers over it. You don't need to risk getting huge shopping sites shut down.
It's important to work against oppressive regimes. It's not tech's job, and it's not worth risking taking tech away from others who need that tech to support their livelihoods. What happens if Souq.com gets shut down there? The Amazon execs mumble about quarterly profits and thin bonus checks while shuttering warehouses, and the local warehouse workers don't get paid.
|
programming
|
>Amazon's AWS only serves about a third of cloud computing.
Amazon + Google + Microsoft gets you over 75%. Why shouldn't they work together?
>And, it's not like sending messages through chat applications are the only way to spread subversive messages.
I don't think you understand what they are afraid of. It isn't subversive messages hidden in plain sight. It is quick-and-easy honest communication that frightens them. That is why they want to shut down signal. People know the thought police can't monitor it so they are free to speak their mind to each other. Then the people aren't censoring themselves out of fear and they can't have that.
> China is fine with cutting off Google... They've done it before, even after it got big. Why wouldn't the UAE?
The UAE is not China. They are far more reliant on companies outside their country for IT. It isn't like they have a thriving tech industry ready to compete. A conglomeration of tech companies could and should certainly push them around to do the right thing.
|
programming
|
FYI, omitting the "genes" metaphor, this is called a production rule system, and it's a classic approach to AI: https://en.wikipedia.org/wiki/Production_system_(computer_science)
Basically, we have a bunch of when-x-do-y rules, all of which get tested on each tick. Whenever a rule fires, some custom code reacts and does something, whatever. This usually involves some changes to the "knowledge base" or some other state.
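A minimal sketch of that when-x-do-y loop in TypeScript (illustrative names only — this is the general shape, not any particular engine's API):

```typescript
// Shared "knowledge base" that rules read and mutate.
type State = Record<string, number>;

interface Rule {
  when: (s: State) => boolean; // condition tested every tick
  then: (s: State) => void;    // custom code that fires on a match
}

// One tick: test every rule against the state; matching rules fire.
function tick(state: State, rules: Rule[]): void {
  for (const rule of rules) {
    if (rule.when(state)) rule.then(state);
  }
}

// Toy rule set: workers gather food; enough food recruits a worker.
const state: State = { food: 0, workers: 1 };
const rules: Rule[] = [
  { when: (s) => s.workers > 0, then: (s) => { s.food += s.workers; } },
  { when: (s) => s.food >= 5, then: (s) => { s.workers += 1; s.food -= 5; } },
];
```

Note that both rules communicate only through the shared state — which is exactly the debugging hazard described below once the rule set grows large.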
With regard to shared global state - that's actually a traditional weakness. Rule systems that have a lot of global state which enables/disables different rules become pretty difficult to debug as they grow large. It's like having global variables in code which controls flow of execution - not ideal.
By the way, as an amusing anecdote - the original Age of Empires in the late 90s implemented all of the strategic AI as production rules. There's not much info left on that, but those scripts were huge and pretty complex. Some intro material can be found here: [http://aok.heavengames.com/cgi-bin/forums/display.cgi?action=ct&f=26,29,,30](http://aok.heavengames.com/cgi-bin/forums/display.cgi?action=ct&f=26,29,,30)
|
programming
|
Hey, thanks for the input!
I was not made aware of the similarities to production rule systems whilst I was developing the initial idea and asking for opinions, but thanks for bringing it up.
I did have plans to add rule priorities and inhibitors (the latter of which is more of a concept found in DNA), for better conflict resolution.
To help allay your second concern of shared global state being a weakness: someone had already made this observation, and suggested adding name-spacing or sub-states so that every action (rule) can voluntarily limit itself to an independent nested sub-state (addressed as `state.substate.variable`, etc).
While I like this idea very much, it would require deep immutability of the state objects, which I was hesitant to add without careful consideration, for performance reasons.
> By the way, as an amusing anecdote - the original Age of Empires in the late 90s implemented all of the strategic AI as production rules. There's not much info left on that, but those scripts were huge and pretty complex. Some intro material can be found here: ...
I find this super interesting! This is exactly what I had in mind while coming up with this idea: to use it for dynamic multi-agent systems where agents can make informed decision while potentially interacting and exchanging information. (NPCs in a game come to mind).
The ability to recombine logic via sexual reproduction would mean that agents can reproduce and emergent behaviour can arise, potentially in a meaningful way, assuming that both parents share a common baseline of attributing the same meaning to the same variables, for example: NPCs with `health` and `stamina` in an RPG game.
Even simple mutations can be implemented by swapping conditions between two actions.
But that's all theory for future development. To my knowledge, there aren't any software design patterns that encapsulate normal code this way and enable the paradigms of artificial life to be applied to normal code.
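The condition-swapping mutation mentioned above could be sketched like this (a hypothetical illustration of the idea, not the when-ts internals):

```typescript
// A "gene" pairs an activation condition with an action block.
interface Gene<S> {
  condition: (s: S) => boolean;
  action: (s: S) => void;
}

// Mutation: exchange the activation conditions of two genes, leaving
// both action bodies intact and syntactically valid.
function swapConditions<S>(genes: Gene<S>[], i: number, j: number): void {
  [genes[i].condition, genes[j].condition] =
    [genes[j].condition, genes[i].condition];
}

// Toy demo over a numeric state.
type S = { x: number };
const genes: Gene<S>[] = [
  { condition: (s) => s.x > 0, action: (s) => { s.x += 10; } },
  { condition: (s) => s.x <= 0, action: (s) => { s.x -= 10; } },
];

swapConditions(genes, 0, 1);

// One "tick": gene 0 now waits for x <= 0, so gene 1 fires instead.
const s: S = { x: 1 };
for (const g of genes) if (g.condition(s)) g.action(s);
```

Because only whole, well-formed units are exchanged, the mutated machine can never be syntactically invalid — which is the safety property that distinguishes this from bit-string mutation of source code.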
Also, it seems that production rule systems are an interesting idea, but very underrated? Why is that?
P.S: I chose TypeScript for the reference implementation because it's easier to prototype in than a fully compiled language like C++, but still has static typing and well defined interfaces.
Thanks for taking the time to respond and for the constructive criticism, I truly appreciate it.
|
programming
|
Well, it’s the oldest software pattern in history at an age billions of years old, which means it was quite literally “battle tested” by evolution for ages.
This is a simplified version with major omissions and some adaptations/compromises/additions for use in state machines. The reference implementation is stable and has a relatively high code coverage in testing. (the core part at the very least)
The reason this version is alpha is that I wish to incorporate more ideas into the spec that could help make it more useful. That’s why I’m requesting peer review from other experienced developers with an eye for detail.
I might have overlooked something here and there. I’d like more input before making it official.
|
programming
|
> That's not quite what I meant. Just that software patterns are typically a bit more cookbook-like, and are extracted from successful projects. This might be a bit wooly when compared with Flyweight or MVC.
That's true in a way, most patterns are much more established than what I have now, but it's still early to call this a well-established pattern. For now it seem more of a model for state machines than a conventional and generic software development pattern.
My original justification for calling it a design pattern is that it can cross language boundaries and be implemented in any Turing-complete language with the proviso of object/dictionary support or the presence of a similar feature.
> And to be clear you are modelling some aspects of evolution, at a relatively high level of abstraction, not implementing evolution in software. That's a really important distinction - the map is not the territory, and all that.
Yes, because I wouldn't call this a suitable approach for genetic algorithms, since the action blocks are treated as discrete/fixed units with fixed behaviours that can't be mutated by flipping a bit in a bit-string.
While theoretically possible with JavaScript, this has very low chances of successfully altering the logic without rendering the code syntactically invalid. Unless we introduce a syntax-aware mutation algorithm to mutate the AST meaningfully and/or according to specific rules that are normally enforced in natural evolution by the laws of physics governing which amino-acids are physically possible at what position in a DNA/RNA strand.
The only possible mutation available right now is exchanging activation conditions between two given actions, which is very weak in terms of GA development.
(I have spotty knowledge here so please excuse any shortcomings in nomenclature or scientific inaccuracy on my part in the above paragraph. I study artificial life as a hobby I enjoy and nothing more)
The recombination aspect is still useful though, especially with how simple it is to compose a new state machine out of two distinct behavioural sets in the reference implementation with a single call: `const machine3 = machine1.recombine(machine2)`. The new `machine3` will exhibit the behaviour of both parents simultaneously. (with the possibility of emergent behaviour)
[Basic tests for recombination](https://github.com/voodooattack/when-ts/blob/master/tests/recombination.test.ts)
I'm writing a new, practical example that will use recombination in a meaningful manner, just give me a bit and I will post it here.
> But if we're going to go that route, does it out-perform simulated annealing? Most GA approaches don't. You may want to look at the GP literature - I think GP is closer in spirit to what you're trying to achieve than GA is.
Genetic Programming would be a better umbrella for this to fall under, so I agree on that.
|
programming
|
I'll look into that book, thanks!
I also think GP would be more interesting if it could be applied to normal, everyday code. And I agree with the problem of fragility; that's why there is no mutation involved, so no risk of things breaking from a syntax error or overly complex mutation logic. For GP applied to general computing like this, every 'gene' should be well defined, perform a specific function, and make sense from a programming perspective.
There is no place for mutation in this pattern just yet for a reason. Emergent behaviour can still be obtained by mixing and matching well-defined behaviour though. There is just no way for completely original behaviour/logic to evolve like in nature.
This is why I hesitate to call this a GP/GA pattern. It treads a very thin line between the worlds of GP/GA and traditional software, and can be tipped one way or the other by adding or removing features.
I just felt compelled to mention that mutation was possible.
|
programming
|
Don't rationalize stupid and dysfunctional corporate practices.
Software projects require hundreds of man-years of work because the manager's pay grade depends on the number of his reports. This means that hiring 10 idiots to twiddle their thumbs is financially incentivized, while hiring one genius to code it in a weekend is punished.
On a larger scale, this pattern continues. Do you think Google would have a half a trillion dollar capitalization if they had 1/10th the headcount? Not a chance! (Note that the market in this case doesn't care what this horde of programmers is actually doing. Headcount == power == money, even if they're a negative contribution to productivity.)
Ultimately, this is a management fuckup, managers don't understand that there are different types of work.
Routine (even if highly qualified) work can scale: if you're in construction, hiring more welders means more buildings built and more money made.
Creative work can't scale: if you're a publishing house, hiring more writers won't net you more bestsellers or make you more money.
Programming is creative work that doesn't scale, even if the end result is just boring CRUD-type database stuff.
Managers are crap at managing, so give it several decades and a couple economic meltdowns before they actually start doing their job properly.
|
programming
|
More writers => more books => more money - at least if you hire writers that are good enough that it's worth printing what they write
More programmers => more software built => more money
It scales just as well as the welders in your example. You can't throw an unlimited number of programmers at one program or an unlimited number of writers at one book just like you can't throw an unlimited number of welders at one house and expect it to be finished faster in proportion to the number of people who work on it. But what you can do is throw an unlimited number of people at an unlimited number of projects and get unlimited amounts of money ;)
|