techsupport
Obviously I can't say how the infrastructure is set up, file shares and so on. If you think it would be a quick reboot and everything would work after, you could consider asking the CEO for confirmation to reboot the server at X time, with a downtime of file access of 15m etc. Emails are not on this server? If they are, then yeah, I'd just leave it till out of hours. But remember to get confirmation for everything, for CYA. You are an apprentice and realistically shouldn't be doing any of that stuff without someone overseeing you. So what I would do is send an email to someone who has the power/auth to restart the server, like your IT manager or CEO. Explain that you have this issue and can only resolve it by restarting the server, outline what it will impact and why you want to do it out of hours. Also explain any concerns you may have for when the server comes back online, so they know the full scope of issues that may happen.
techsupport
It's a bit of an old server, unfortunately. When I have rebooted it under supervision from my manager before, it has taken up to 30 mins for a full update/reboot. The IT dept currently consists of myself and my manager, who is out of the country on leave. My acting manager is from the finance dept but has been very supportive, and I have permission from them to reboot the server after office hours. It suits me tbh, could do with the overtime. Our email is web-based, but our end users store all their files on the server instead of locally, so I can't really justify office-hours downtime. I am currently putting a guide together to send out to the whole office so they can connect directly to the printer for the time being. If the issue escalates after rebooting the server, I will consult my acting manager and either continue to direct print until the IT manager returns or seek 3rd party consultation. Thanks again for your advice. I seem to have things somewhat under control now :)
techsupport
The laptop and desktop are both called Y520 for some reason. https://www.lenovo.com/us/en/desktops-and-all-in-ones/legion-desktops/legion-y-series-desktops/Legion-Y520-Desktop/p/99LE9Y50275 I was just trying to find more info about it. What is the maximum supported RAM? I know it only has two slots, but will it handle 16GB sticks? How big is it? I want to put it in a new case, but is it some proprietary MB that will not fit in a normal case? Can it fit into a mATX case? I am sure I can open it up and see all the slots and what is on the board, but if I had a normal tech sheet it would tell me everything I need to know.
techsupport
Yeah, I saw that, but because of the way other stuff is worded I think it means the most you can buy it with. Like for the graphics card, it mentions the 1050 and 1060, but those are your only options. You cannot buy the desktop with more than 16GB. But I wasn't sure if that was because that's the only way they sell it, or because that's all it will work with. I might just start with a new MB instead of a case.
techsupport
Seems like you already had the failing DP connection on your 1080; saw your post from 35 days earlier. I think this correlates with your current problems. I'm not sure if MSI would've accepted the RMA claim with just the DP problems... but I do know that a lot of people got their card swapped because of it. If you tried to yank the DP cable out, you probably mangled one of the pins inside the connector, shorting the connector. Not deadly, but the card then refuses to boot/be recognized.
techsupport
You've mentioned that the fans didn't spin when you powered up the computer. A quick Google around found that most 1080s have a 60C fan limit; at temps under that threshold the fans won't spin.

* If you remove the PCI-E cable from the GPU, does it beep when you try to boot the computer? If yes, that means the 1080 is indeed active and is now screaming for electricity. If not, yeah, then it's probably dead.
* Does your motherboard have POST LEDs? A BIOS reset could fix this, but you did mention that your RX 580 isn't causing a problem in the same slot. So I guess this won't make a difference, but it also couldn't hurt much.
techsupport
Pretty sure the video I linked will still apply. You just remove the screws from the bottom and use something skinny (like a guitar pick) to pry the bottom shell away from the keyboard bezel. This will reveal all the internals. The hard drive is held in place by a screw or three. You just swap the mounting bracket surrounding the drive over to the new drive you want to put in, then screw everything back together. Since the new hard drive will be empty, you'll need to use recovery media to reinstall Windows.
techsupport
I apologise for it taking longer than a day. I've been very busy. I was able to successfully make a DVD with three shorter videos and a menu on it, and it plays on my PC, PS2, and PS3. I assume it will also work on my Blu-Ray/DVD player and anywhere else I might want it to work. These clips are just a few minutes each; they're my best YTPs I've ever made. I know the main project does not exceed the size of the DVD. It's probably just too big for DVDStyler to handle burning as a single clip. I'm leaning towards rerendering the main film in two parts, splitting them where a bit of a pause or stumble wouldn't disrupt the flow of it, but since that takes time and might not work, I'd like your opinion first. I would of course make it go from first to second part automatically, but since it's physical media, this won't be seamless, ergo finding a good spot for it.
techsupport
Hm. With an old 486, there are some troubleshooting steps that a lot of younger PC builders might not remember. It *sounds* like your PC isn't detecting the OS, but that could be due to a bunch of different factors.

1. Did you check the HDD power connector and make sure that you plugged it in correctly? Some older Molex plugs weren't keyed and you could reverse the power plug, shorting or frying the drive. That's more common on floppies than HDDs.
2. Is the HDD's master/slave jumper on the right pair of pins, and is it on the correct header of the IDE ribbon cable?
3. I *think* that with a 486 you're past the era of having to manually set IRQs, but are you able to get into the BIOS and verify the HDD is detected? **[Here's](ftp://ftp.packardbell.com/pub/itemnr_old/ch1bios/ch1bios.pdf)** a manual for the old BIOS settings. Make sure these all make sense for the hardware you have. Consider setting some of these to low-end defaults or bare-minimum settings before you try to 'go big'.
4. When you formatted the drive it sounds like you used reasonable contemporary software. Are you sure you formatted it FAT or FAT32, and that you created a successful MBR? I seem to remember that manually building the MBR was a pain in the old days (see the sketch below).
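If you end up rebuilding the disk from a DOS boot floppy, the classic sequence went roughly like this (from memory, assuming MS-DOS 5 or later; check your DOS version's docs before trusting me):

    fdisk
    fdisk /mbr
    format c: /s

`fdisk` creates/activates the primary DOS partition, `fdisk /mbr` rewrites the master boot record code, and `format c: /s` formats the partition and copies the system files so it can boot.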
techsupport
Jurph has a good set of steps going here. I'll add one to check: 5. Does the system have a battery backup for the CMOS RAM? There are essentially two types: on-board and external. The "external" is still inside the chassis but is not mechanically fastened to the mainboard. It might be a plastic battery holder with leads connected to the mainboard. The "on-board" is directly attached to the mainboard, usually with solder on older systems and a clip on newer ones.
techsupport
> I think that with a 486 you're past the era of having to manually set IRQs, but are you able to get into the BIOS and verify the HDD is detected? Here's a manual for the old BIOS settings. Make sure these all make sense for the hardware you have. Consider setting some of these to low-end defaults or bare-minimum settings before you try to 'go big'.

Presumably they had to have done, if they were able to install DOS on it from floppy.

edit - hold that thought... just read the rest of the thread.
techsupport
Have you tried a new hard drive? The description sounds like an HDD issue... To test: load a version of DOS straight from the floppy and see if it runs that way. Back in the day, before hard drives, you'd load your DOS via the floppy, and apps from additional floppies you'd put in would be loaded off those disks... So if the memory is good, you should be able to load an OS into memory alone.
techsupport
Sorry, I was using [executable.exe] as a placeholder for the name of the executable file. Sorry if that threw you off. The command prompt in any OS will have a working directory: a directory that is currently selected that will be used if you type a relative filename. For example, if I am currently in "C:\Windows\", typing "cmd.exe" would try to call "C:\Windows\cmd.exe" if such a file exists. Typing CD [directory] would change that working directory; alternatively, you can just call an executable by using its full file name.
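A quick illustration (the paths and executable name here are hypothetical):

    cd C:\Tools
    myapp.exe
    C:\Other\myapp.exe

The second line runs C:\Tools\myapp.exe, because C:\Tools is now the working directory; the third runs by full path, which works no matter what the working directory is.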
techsupport
Assuming you're on 64-bit Windows? Extract the exe to your desktop or a folder. Open Command Prompt and use the command cd to change directory. You can just copy what I did; just make sure it's your username. Obviously it will fail to install for me since I don't have the network card installed. Have you tried just double-clicking the exe? I don't see why it wouldn't just run on its own. https://i.imgur.com/CCLVbY1.png

    cd C:\Users\YourUsername\Desktop\update

Enter

    82579VSKUW64e

Enter
techsupport
I can't, actually, I don't have an imgur or anything. But I've literally told you everything it says: it's all one big volume, says "14.4 GB free" which is about right for a 16GB drive, but when I plug it in, Windows doesn't recognize it. So I go into Disk Management, and it says "healthy primary partition" and that's it. I can't delete the volume, I can't shrink it or even create new partitions anymore. Whenever I go to format it, it acts as if it's being formatted, but when it's done, nothing actually changes. I've tried quick and full format. Hey, one thing I forgot, I did try out Etcher.io a while back, and flashed a Linux Mint ISO onto it, could that be why it can't be deleted? Does Etcher make things permanent or something?
techsupport
Windows 7 has no USB3 drivers; you need to do some trickery to get them to work past boot. That board has no USB2. I have a whole toolkit for getting Windows 7 working on Ryzen. Do you have an older functional computer you can use to set it up? In general, transferring a drive to a new system without properly configuring it first never works well. You need to do some prep. PM me if you want the kit and instructions; it is fairly automated. Use Windows 10 as a last resort; it is a disaster compared to 7, especially for the inexperienced user.
techsupport
I asked if I could control the computer beyond the bounds of the domain rules, as this computer is needed for a special project, and I am the local site administrator. I have a domain account, but it has barely more power than a normal domain account. We need to access incognito mode, and I want to switch the home page to our local server, as the computer is used for cyber security penetration testing for our site. I will not be using any credentials, as this is not a normal thing that I will do, so I do not need credentials to log in. I have a local admin account that I use, and it works for everything that I need to do. I don't see why I cannot use that to change the OU, because I can use it to disconnect from the domain.
techsupport
I’m with /u/Monkey525. It's not that what you're asking to do isn't possible; it's that you're the "site administrator" but don't know how to do these things. Your domain admin can do targeted or loopback policies fairly trivially if this were really necessary. As for using the ADMX or changing the permissions on the key: domain policies take precedence over local policies, so you're not going to be able to block domain policy (with some rare exceptions). Instead of focusing your energy on that, why not find a way to work within the parameters you've been given? For example, you could edit the Chrome shortcut and append the website of your choice, so when someone clicks to launch Chrome it launches the website you specify.
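For example (the install path may differ on your machine, and the URL is just a placeholder): right-click the Chrome shortcut, open Properties, and append the site to the Target field, something like:

    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" https://intranet.example.com

Chrome treats anything after the executable path as a URL to open at launch, so no policy or registry change is needed.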
techsupport
> Do you have the TV set to output audio via SCART?

I'm not sure it can output audio. In the TV's settings it says "analogue and video input from analogue or video devices". I have used SCART in a similar way before and have managed to output audio and video. Is it possible the TV doesn't support output over SCART?

> Also, can your DVD player even receive an input from SCART? Usually they're only output on players.

I think it can. I've set it to "TV in" mode, which I assume is for the SCART (I tried the other modes too).
techsupport
Off the top of my head, you are either overheating or your power supply is failing. First thing to do is check your CPU/GPU temps while you are gaming: [Core Temp](https://www.alcpu.com/CoreTemp/) for your CPU temps and [MSI Afterburner](https://www.msi.com/page/afterburner) for GPU temps. Max temp for your CPU is 60C, and try to keep your GPU around 85C or less. If your temps are under those listed, then I would swap out your power supply. I doubt your hard drive would cause you to crash once your game has started. Once you are gaming, most of the game should be loaded into RAM, which leads me to believe you are suffering from heat build-up, or your power supply is struggling to keep up as it slowly dies.
techsupport
Yes, that is exactly where you want to be. Be warned, it may take some trial and error. The idea is less power = less heat. Your CPU may perform just fine at a lower voltage, but it could also cause instability. So you might have to play around with it a bit. The goal should be to keep your CPU under 60C during load, with the highest core frequency and lowest Vcore possible. Kinda like overclocking in reverse.
techsupport
What's my solution then? :/ I had everything working great until my wife got a new work laptop/docking station that she uses at home, and all of the ports changed up on me. Should I just look into buying her a video card with dual DVI? The mobo is VGA and DVI. Do I need one, and to stop relying on the integrated graphics of the CPU/mobo? The goal is to basically allow her to use dual monitors for work and for personal use. Work is fine, but the VGA to DVI is fucking me up. She just changes inputs on the monitor to switch between the laptop or desktop. Maybe I should look into a KVM switch solution instead? She doesn't exactly like having to change the input via the monitors. Thanks
techsupport
These motherboards are pretty darn rare as the 7567 is still relatively new. Dell demands a full $700 instead of fixing one single damaged piece on your motherboard? Holy crap, we have an Apple copycat!!!! (that kinda rhymes). Seriously though, laptop companies are becoming douchebags because:

1. They lock down the BIOS and HW upgradeability as much as possible (like soldering things, for example), and give users a middle finger in the face when they try to upgrade.
2. When users have a busted capacitor, instead of soldering a new one, they go "Oh, we are sorry (insert bullshit reason), so you will have to pay (insert x higher than $500)" or "Fuck off and buy a new one".

"Instead of copying what makes Apple great (brainwashing their users), they copied everything else that makes Apple crappy" - Louis Rossmann.
techsupport
Why don't you part it out and sell the parts, the case, and the motherboard too? Save a little bit more, combine it with the money from this crap, and get yourself a better one. Besides, stay away from Dell. They are plagued with issues this year and I don't think it's going to get better. One guy whom I knew had his Dell G7 shipped with an underpowered PSU. They refused to replace it even after acknowledging that this is an issue.
techsupport
So one HD and one DVD. The Toshiba needs to be in the list of bootable devices. If it is, then you may have a problem with BIOS settings. An HD missing its boot partition would show a clear error about the device not being bootable. If so, take note of the actual settings, and look for a BIOS reset. You'll lose your config, but it may solve your problem. Also look for the error log if you have one.
techsupport
I was going to say, you don't have to connect that PC to the router upstairs; you just need to connect anything that can connect via Ethernet to the router, to verify that you can even get an Ethernet connection from the router. Network troubleshooting starts at the hardware level. What have you done to verify that the cables are working cables? Have the powerline adapters been working with other devices, so that you know for a fact that they are not the issue? You have to rule out all the hardware issues before you even look at software.
techsupport
https://www.hdmi.org/manufacturer/hdmi_2_1/

>HDMI® Specification 2.1 is the most recent update of the HDMI specification and supports a range of higher video resolutions and refresh rates including 8K60 and 4K120, and resolutions up to 10K. Dynamic HDR formats are also supported, and bandwidth capability is increased up to 48Gbps

Older cables cannot drive 48G, so you cannot run 4K at 120 fps over them, which is required for 3D 4K. I know that for a fact because I had to buy a new cable for my PS4 Pro. But you'd rather deny what I'm saying without any proof than believe the people who make the specifications.
techsupport
>You do not need a new HDMI cable for Ultra HD 4K **(probably).**

> Update 1/2017: A new version of the HDMI spec has come out, called HDMI 2.1. For most people it's way beyond what they'll need at home, but there are some new features and a new cable type that's good info to know.

The article just disproved itself twice. Older cables CANNOT drive 48G; it's a fact. Yes, most people think getting an expensive cable with weird certifications will improve their connection, and that's just plain wrong. The fact remains: not all cables support the newest HDMI features.
techsupport
1.2/1.3/... are specification versions. They're revisions of the standard, not cable "versions". They dictate how the format works for specific purposes. A device has to implement the latest revision to use those features. The cable itself is just an interface + copper wire, and the interface itself has not changed AFAIK (even if it has, it's backwards compatible). Nevertheless, the cables that were produced before certain revisions of the HDMI standard do NOT support 48Gbps transmission, so you cannot use them with 4K120.
techsupport
> I've only ever received these notifications on this phone.

I suspect that it's because this phone is the only one where the security holes have been patched. I think Snapchat has *always* been doing this, but with your new phone, you're being alerted when it happens. Kind of like Facebook, and their app that uses the microphone to eavesdrop on users. There's very little you can do:

* Write a message to Snapchat, give the app a 1-star review, and then:
* Stop using it, or
* Keep using it.
techsupport
>Kind of like Facebook, and their app that uses the microphone to eavesdrop on users. As much as I detest Facebook, and recommend that everybody stops using it; this is a complete myth. Snapchat isn't secretly using your camera without your permission, either. The app simply takes a little longer to close than usual, so remains running for a short time once you exit it. This is all the phone is reporting. It's an enhanced security feature that people are misinterpreting.
techsupport
Or I'm on my way to work, and I'd rather spend my time relaxing than appeasing the temperament of some random Reddit techlad who's convinced themselves that Snapchat, Facebook et al are accessing our HID's remotely. But, I guess if you're *that* certain of the nefarious intentions of Snapchat/Facebook/Instagram/etc, then your only option is to delete your accounts and never use these services again. On you go then. There's a good boy!
techsupport
I think you need to figure out if the problem exists within the XP image or if it's the VM host's compatibility with XP. Do you have Windows XP installation media? Can you confirm your VM can run Windows XP by installing XP to a blank / clean virtual disk? If it works, then we can look at how you're making your VHD. If it doesn't work, then we can troubleshoot your VM platform more.
techsupport
I've already sent it back once because the SSD pooped and we thought it was the mobo. I can't go another 3 weeks without my computer right now as I'm at uni. I love the laptop except for what you just said about the portability vs cooling. I have to undervolt, cut off the turbo, use a cooling pad, and repaste to keep it below 90C. But the portability and performance are unmatched. It has a 1070 and it kicks butt.
techsupport
The details in the dump files for that particular bugcheck code can often be helpful. If any .dmp files exist in **\Windows\minidump** we can try a manual analysis. If you'd like someone here to do a manual analysis you can upload the minidump folder and its contents to a cloud drive or file sharing service and post a download link here. You'll probably need to copy (not move) the minidump folder and its contents to your Desktop and work with the copy to avoid file permission issues.
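If file permissions get in the way, copying from an elevated Command Prompt also works, e.g. (the destination path is just an example):

    xcopy C:\Windows\minidump "%USERPROFILE%\Desktop\minidump" /E /I

Then zip the copied folder and upload that.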
techsupport
We might be able to get some more useful information from the dump file but 0x133 bugchecks with a 1st parameter of 0x1 usually don't pinpoint the driver or device causing the timeout. 0x1 means it's a cumulative system timeout rather than a particular driver timeout. I'd likely be able to see what drivers your system is loading, whether or not you're using the latest BIOS for your motherboard, and whether or not anything is overclocked (intentionally or unintentionally.) The instructions in my original reply are what I'd recommend doing to make the dump file available.
techsupport
Thank you, I was able to get the dump file. It is a 0x133 bugcheck with the first parameter of 0x1. The automated analysis is blaming the Nvidia driver as the culprit, but I don't think it actually is the problem. Per the Microsoft documentation, the only real way to figure out who is to blame is by doing an event trace, which we may need to do eventually. However, I can see that you are running a very outdated motherboard BIOS. According to the dump you're currently using the 1.70 version (2015/09/23) and the latest is 7.20 (2018/03/13), which you can see [here](http://www.asrock.com/mb/Intel/B150%20Pro4D3/#BIOS). I prefer to be running the latest BIOS on my own systems, simply to have the system as stable as the manufacturer has been able to make it. Is that something you're comfortable doing? edit: Are you actually using an Nvidia GPU? The link to your build suggests you have an AMD GPU.
techsupport
It's certainly possible and where I would start. There are some problems which can only be fixed by a BIOS update and scheduling/timing issues are among them. ~~I'd suggest using the Instant Flash method which would involve downloading the update, unzipping the contents to an empty flash drive, and then using the Instant Flash tool in the BIOS settings menus to perform the update.~~ edit: The documentation for the Instant Flash options seems to suggest the update can be downloaded from within the BIOS settings menus as long as you have an Internet connection. I'd want to do the update from within the BIOS menus if possible as that seems to be the most reliable method.
techsupport
Let it dry for days. Numerous days. Air flow, warmish air. Rice is a joke; it has no real desiccant properties. Go to a pharmacy or craft shop and buy silica gel if you want (or the dehumidifying stuff for your closet). Seal it in a bag with some. Or just natural air dry. Shake it, tip it, in case water did get inside. Have a folder on your computer ready to copy to; you may not have long. If it appears OK, use it as a second/unimportant backup, because you can't really trust it now.
techsupport
Pagefile is not for power loss. It's overflow for the memory system. When you have a lot of programs and files open, you can have a higher requirement for memory than you have memory. The least used pages are stored in the pagefile to make room for more active pages. When the machine needs one of these pages, it's brought back into memory and something else is swapped out. This technique is called [virtual memory.](https://en.wikipedia.org/wiki/Virtual_memory)
techsupport
As /u/AttackTribble said, `pagefile.sys` is for virtual memory. It is for when you accidentally overflow the amount of memory you have. By default Windows sets it to be dynamic in size, which can cause it to grow out of control if you have limited disk space. If you have more than ~4GB of RAM, you can safely set this to a static size of 4GB; otherwise set it to the amount of RAM you have. `hiberfil.sys` exists because you have Hibernation turned on. Hibernation is a feature on Windows that essentially allows Windows to dump everything in RAM to disk (into the `hiberfil.sys` file) and then shut down. RAM is a volatile memory storage medium, which means it is erased when it loses power. So this allows Windows to save its current state and completely shut off power. You can use Sleep mode, which is a low-power mode but still uses power, or just shut down all the time. If you do not want Hibernation and want to reclaim the space used by `hiberfil.sys`, you can open an Administrator CMD prompt and run the following command:

    powercfg -h off
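If you do want to set a static pagefile from the command line instead of clicking through System Properties, something like this should work in an Administrator prompt (a sketch; it assumes the pagefile lives at C:\pagefile.sys, and 4096 MB is just the 4GB example above):

    wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
    wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096

A reboot is needed for the new size to take effect.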
techsupport
> By default Windows sets it to be dynamic in size, which can cause it to grow out of control if you have limited disk space. If you have more than ~4GB of RAM, you can safely set this to a static size of 4GB; otherwise set it to the amount of RAM you have.

TBH, it's really dumb to mess with your VM settings; just let Windows manage it. It's extra dumb to just set it to a fixed value of 4GB, or to set it equal to the amount of RAM you have, because some random dude on the internet told you that was a good value.
techsupport
Back during Win9x up to XP, keeping your Virtual Memory at 1.5 times your RAM resulted in substantial performance gains. This was to reduce the amount of times Windows needed to pull from Virtual Memory on what was typically a 5400 RPM hard drive transferring at about 100MB/sec. That being said, you are right that it is silly to mess with Virtual Memory settings for marginal performance gains now, especially with solid state drives that are far faster than mechanical drives, as well as the OS being able to process data faster thanks to multi-core processors.
techsupport
> Back during Win9x up to XP, keeping your Virtual Memory to 1.5 times your RAM resulted in substantial performance gains.

I remember those days. I also know that now, just about anything a random person does to VM settings based upon random advice from the internet is almost always universally worse than just leaving it set up to be managed dynamically by Windows. In the XP days, Windows was a little more conservative with using space because drives were smaller, so while the dynamic setting generally worked out OK, going ahead and telling Windows to use several gigs of your space for swapping worked out better. Nowadays, Windows has no problem using up your hard drive, so people are trying to solve the opposite problem by telling it to use less, and yet also think that will make things faster somehow. The people who turn off swapping altogether suffer from a different problem, in that virtual memory works somewhat differently than they think, so they still aren't getting any of the gains that they imagine by trying to force Windows to not swap at all. As someone with a CS background and a tech support background, it's super frustrating to read some of the misguided advice in this sub.
techsupport
This "some random dude" has been building computers for almost a decade and has a degree in Computer Science so it is not just dumb estimation or guess. I know how virtual memory works. The pagefile/swap storage is a way to make sure you OS does not completely crash when you run out of memory. Ideally the amount of space you need for pagefile/swap storage would vary from person to to person and use case to use case. Ideally, if you want to optimize your pagefile the best you can, you would want to know the total possible memory your system is going to use at once, add some padding to that, subtract how much RAM you have and that is the size of your page file (https://blogs.technet.microsoft.com/motiba/2015/10/15/page-file-the-definitive-guide/). *If* you have a HDD or high capacity (512GB/1TB) SSD, then yeah, you should not mess with your pagefile, just let Windows manage it. That is what I do on my desktop at home. The issue is that a lot of Windows laptops often come with 8/16GB of RAM and only a 256GB SSD or even 128GB just to say it has an SSD. 128/256GB does not go very far on a modern system. Windows + a couple of games + Microsoft Office + some pictures, movies and that space is gone. Often times, consumer laptops have more memory than most people need and less storage space than people need so it is not the worse thing in the world to statically set your pagefile size. A consumer laptop that is not running VMs, server applications or any other large memory intensive tasks will never overflow your virtual memory space faster than your OS can yell at you for running out of memory. 4GB is a reason balance between having a decently sized page file to give Windows time to react to running out of memory and at the same time not consuming your whole 256GB SSD. This gives you 12 GB virtual memory (on 8GB systems) or 20 GB virtual memory (on 16 GB systems). My Surface Book follows into this bucket (8GB of RAM, 256GB SSD), and a 4GB pagefile is perfect for my Surface Book.
techsupport
No, that's not what a pagefile is. It is part of the computer's memory system, and is usually in use all the time, not just when the power is off. Here's an analogy. Say you're working at your desk, which can hold eight pieces of paper or eight books at one time. When your desk fills up, the next time you need to look at something, you first need to move a piece of paper off of your desk. You pick a document that you probably won't need for a while and put it in a filing cabinet. A page file is something like a filing cabinet. [Moderately geeky explanation](https://www.howtogeek.com/126430/htg-explains-what-is-the-windows-page-file-and-should-you-disable-it/)
techsupport
Try that on a 60GB boot drive... Also, on mechanical drives it is STILL better to set a fixed swap amount. When the queue of your drive is full, the last thing that you want is to resize a big file, especially if the drive is close to being full. It's fine to manage a crapload of servers, so you're surely aware that some big-ass setups would do perfectly fine on a 60GB boot drive. Windows though? Nope. **EDIT:** Thanks for the downvote. :)
techsupport
VMs are another story, RAM aside. To keep it short, an SSD's performance will suffer the closer it gets to being full, and 60GB ain't enough for Windows; it will fill up with temp files and updates before you realize. 60GB simply isn't enough for a full Win 10 install, let alone an enterprise one. And I'm not bashing Microsoft just because; I use Windows myself, but with enterprise-level hardware? Why? EDIT: Just to be clear... a 60GB **partition** on an SSD is not the same as a physical SSD itself.
techsupport
Okay, I had chilled a while on the comp, then the stutters came back when I was in-game again. I closed all programs and then opened them back up one by one, and I'm pretty sure it reacts to my Chrome tab with Netflix up, because each time I open it I get stutters and each time I close it it's smooth. Gonna try another browser. EDIT: seems to be because of Chrome + Netflix; it doesn't stutter with Edge.
techsupport
Usually it's written on the motherboard itself, but the GPU might be blocking the part of the motherboard that has the text. You probably have a P6T if that was the BIOS it allowed you to flash. But did you change the BIOS boot settings after the update? For example, you might have had either old-style IDE disk management enabled instead of the newer AHCI, or even RAID mode. Those aren't compatible with each other.
techsupport
That’s probably the problem; chances are another device has the same IP or is trying to use it. Please leave it dynamic on the computer (in Control Panel) and restart. Whatever IP your computer gets then, keep it; it means the router has it free. Then go on your router and set your device's IP address to reserved, and the router will associate the port with your IP internally, so it will never change.
programming
There's a hotkey that will print random Bible verses. Along with religious themed games. There's a whole bunch of random games built-in actually, quite interesting stuff, and remarkable when you consider it's all one bloke doing it alone. He got banned from here long ago but Terry's threads were interesting from a purely technical point of view and sparked healthy debate regarding how to deal with his outbursts/behaviour which obviously was quite offensive at times. I remember people praising him and saying nice words only to get a reply that was nothing short of outright racist contempt and somewhat scary. He was a complicated person. https://www.reddit.com/r/programming/comments/38u4zc/flight_simulator_and_first_person_shooter_in/
programming
Hardly true. Modern systems just have like a thousand times more stuff than Terry's, and they support multiple programming languages out of the box, and have like a zillion times more applications which are actually good for something rather than just childish toys. And some of the more serious stuff he likes to talk about, like the *compiler* called HolyC, is also worthless. It's just a lexer married directly to a code emitter, going straight to spewing out assembly for each lexed token. He remarked somewhere that he had to change the C grammar a little to make it work; for instance, casts are written after expressions so that you can just spew the assembly that does the cast after the expression is evaluated. It's all done with number constants that represent the opcodes put straight in the program source code, `*dst++ = 0x12; *dst++ = 0x34;` style. And of course, since there's no AST, no inlining, etc., the performance of the compiled code is going to be terrible. I don't think you can even call it a JIT, because it's too primitive for that. It's really just a fast single-tier AOT compiler that eagerly compiles any statements it ever sees and dumps them into a global namespace. I don't understand why anyone thinks TempleOS is worth anything. Nothing in it will be adopted by anyone else. It's literally like going back 30 years in time in terms of programming, in both good and bad ways. Good in that it's simple, alright, but bad in that it's clearly an insufficient basis for building actual applications. The low audio & video standard it imposes on itself already kills the operating system for any practical purpose. It's literally all because Terry regressed as his illness took over and went back to his fond memories of being at school learning assembly and hacking the Commodore 64, and consequently imagined it was god telling him that this was the correct, holy way to do things.
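To give a concrete picture of the style being described, something like this is roughly what "a lexer married directly to a code emitter" means (a made-up sketch, not Terry's actual code; the buffer and helper names are invented):

    /* Single-pass, no-AST code emission: each token is turned into
       x86 opcode bytes the moment it is lexed, appended to a buffer. */
    #include <stdint.h>

    static uint8_t code[4096];
    static uint8_t *dst = code;

    static void emit_push_imm(int32_t v) {
        *dst++ = 0x68;                  /* PUSH imm32 */
        *dst++ = v & 0xFF;
        *dst++ = (v >> 8) & 0xFF;
        *dst++ = (v >> 16) & 0xFF;
        *dst++ = (v >> 24) & 0xFF;
    }

    static void emit_add(void) {
        *dst++ = 0x58;                  /* POP EAX */
        *dst++ = 0x5B;                  /* POP EBX */
        *dst++ = 0x01; *dst++ = 0xD8;   /* ADD EAX, EBX */
        *dst++ = 0x50;                  /* PUSH EAX */
    }

So compiling `2 + 3` is just emit_push_imm(2), emit_push_imm(3), emit_add(), with no tree, no register allocation, and no optimization passes in between, which is why the output is slow.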
programming
Give me Puritans, for sure. They believe I'm going to hell -- which means I'm not their problem, long-term. They believe in redemption -- which means that I can apologize and at least potentially *have my apology accepted*. They believe in argumentation[0] to the point that they think any slob with a bible can argue on an even ground with a thousands-of-years-old established authority. Meanwhile, SJWs think there are things I can't argue about (not *oughtn't*, which is bad enough, but *can't* because by nature I am *incapable*) based on my sex and skin color. Not only will they never forgive my slightest offense against their cause, but they'll even try to fuck over my children for it, decades later. Even a few minutes alone with an SJW is enough for a normal person to become "a Nazi" who "should be punched" and "not given a platform". Most importantly, Puritans had an entire world full of clear, known adversaries, so they didn't need to dig around looking for enemies under every rock. SJWs have positions like "uh, good things are good?", "hate is rilly rilly bad", "you should be nice to people", "if someone says he wants to commit genocide, that guy, he's not a good guy" -- which has basically **no opposition anywhere on Earth**. Puritans have the devil. SJWs do not have a Shitlord King somewhere. Neither do they have the Shitlord King's four heavenly generals to fight. Nor their eight celestial henchmen. Nor their sixteen vile assassins. Nor--anything. There's no Shitlord Army. Even the small number of people who will cop to positions like "Nazis? Yeah I read a book, it sounds interesting", or "Racism? well if 'racism' means 'wanting picket fences and peaceful neighborhoods', then yeah I'm racist af" -- even these guys are few enough, or in hiding enough, that SJWs feast on their own most of the time. They're just *beasts*. 0\. I wish I would never have to say this, because it implies that people *don't*, which is far more of an absurd position than any position on the afterlife. It's at least not a performative contradiction to speculate about the afterlife.
programming
I really do believe something along these lines could work and be a positive influence. Can you imagine writing serious software for a corporation whose legal department forbids you from using almost any software released or licensed in the last N years? Say because the company is a heavy weapons manufacturer, or uses slave labor, or is a government contractor working on mass surveillance tools? What if even operating systems and database software and compilers for common languages were subject to similar, enforceable restrictions? These companies would barely be able to function. But this license in the post is too zealous. Alcohol and pornography? I don't see how these things are harmful. It sounds more like Christian morality than a genuine attempt at ethics in software. Something like this would need to be the minimum of what everyone can agree is unethical, not the maximum of what anyone might believe is unethical. Murder, injury, abuse. Not fucking beverages. There's even an apparently serious discussion taking place in the issues for this Do No Harm license about banning processed food companies. It's impossible to take this seriously.
programming
> But this license in the post is too zealous. Alcohol and pornography? I don't see how these things are harmful. Yes, well that's the key to it all isn't it? _You_ don't see them as harmful and so you think it's wrong to ban them. These people do see them as harmful and so think it's right to ban them. Personally, I don't want either of you to have any say whatsoever in who gets to use what software. Because I don't trust either of your sense of what's just and unjust, it'll just end up with you trying to force personal opinions onto everyone else.
programming
I appreciate your attempt. But if any point in the license even has to be opened to discussion, then I think that including it would doom the license to failure. I really encourage you to stick with the obvious things. Use restraint. Stick to obviously, directly harmful actions. Some very valuable software advice applies here: Limit the scope of your project. Also, consult a lawyer. Preferably a slew of lawyers. I could write up some fantasy ethical-use license, too, but I'm not a lawyer and I don't know how to write a license that makes any legal sense. Later, after the first license targeting directly and indisputably harmful actions finds any kind of acceptance and actual use among developers, which will be a challenge in itself, then maybe people will be ready to consider taking another step and targeting actions that cause indirect harm... Though, honestly, I don't think you could ever sell me, personally, on a license that targets things like gambling or pornography or alcohol. While they certainly can cause harm, the participants are consenting adults. As a consenting adult, I _like_ drinking and gambling in moderation. I wouldn't want to make my software inaccessible to these industries, and I would be annoyed at developers who did so. I don't blame alcohol manufacturers for drunks; I blame drunks for drunks. I don't care as much for pornography, but I think a lot of sex workers would be pissed off if developers started adopting a software license that interfered with software or websites they use and made their lives more difficult.
programming
But that's exactly it. You criticized the implementation when the problem is with the concept itself. Your criticism was that they were against different things than what you're against. The problem is that _everyone_ is against different things than you're against, to some degree or another. It's trivially easy to imagine a similar implementation that bans the use of a software by companies that support gay people, pushed just as seriously and with _the exact same sense of self-justification_ as yours supporting a version that bans war profiteers. Edit: for posterity, this was the response he PM'd me. I'll let him have his last word in public. >Because banning gay rights would fall under the category of targeting "only things that everyone can agree are unethical". Sure. >At first I convinced myself that you weren't arguing in bad faith. Being a dumbass for the sake of getting a rise out of people who give a shit. It didn't look like a habit when I checked in suspicion, and most people who do so do almost nothing else. >But I don't believe that anymore. You aren't this stupid. Grow up and find less pathetic things to do than wasting the time of people who are trying to do something positive. >Or maybe you are this stupid. Maybe you'd have to be, anyway, to choose to be such a waste of air. Either way, you aren't worth any more of my time. You aren't worth anybody's time. Bye.
programming
Yes, but you will likely see better performance from a switch, where the compiler can use its heuristics to determine the optimal look-up strategy. With more complex sets like MIPS which require masking and operand analysis to determine the instruction, a switch is far faster than the alternative. I'm also not used to seeing the term 'native code' used in a low-level discussion. That is usually reserved for high-level languages. By definition, *any* valid sequence results in valid machine code - otherwise you aren't compiling. However, array-of-function-pointers isn't always the fastest solution. O(log n) can be faster than O(1), as Big O notation doesn't reflect speed, but scalability.
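For concreteness, here is roughly what the two dispatch styles under discussion look like in C (a toy interpreter sketch; the opcode field and handler names are invented, not from any real emulator):

    #include <stdint.h>

    typedef void (*handler_t)(uint32_t insn);

    static void op_add(uint32_t insn) { (void)insn; /* ... */ }
    static void op_sub(uint32_t insn) { (void)insn; /* ... */ }

    /* Style 1: switch dispatch. The compiler chooses the lookup
       strategy (jump table, binary search, compare chain) from the
       case values, and can inline the handlers. */
    static void dispatch_switch(uint32_t insn) {
        switch (insn >> 26) {        /* MIPS-style primary opcode field */
        case 0x00: op_add(insn); break;
        case 0x01: op_sub(insn); break;
        default:   /* illegal instruction */ break;
        }
    }

    /* Style 2: hand-built array of function pointers. Always an
       indirect call; the compiler can no longer inline or merge
       handlers across cases. */
    static handler_t table[64] = { op_add, op_sub /* , ... */ };

    static void dispatch_table(uint32_t insn) {
        table[insn >> 26](insn);
    }

The table version is O(1) by construction, but that doesn't automatically make it faster than whatever the compiler generates for the switch.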
programming
They're aware of it, and in many cases the compiler will *not* emit a jump table. It depends on the data and the context. Are you seriously implying that on x86, 1 instruction is always faster than 2? Because I can trivially show situations where that is not the case, as not only do instructions have different base costs on x86, but they don't have constant costs due to the fact that the execution rate is dependent on a *ton* of factors. Heck, if some of the functions it would call would be faster inlined, you've eliminated the compiler's ability to do that if you use a jump table. The core rule when writing code is *don't try to be smarter than the compiler*. Past that, jump tables aren't always appropriate (nor can they always work). Allowing it to inline would probably be the fastest approach, particularly if the switch is the *only* place calling these functions. Forcing inlining would generate pretty fast code.
programming
> The core rule when writing code is don't try to be smarter than the compiler.

So why are we optimizing the code?

> Are you seriously implying that on x86, 1 instruction is always faster than 2?

I know.

> Heck, if some of the functions it would call would be faster inlined, you've eliminated the compiler's ability to do that if you use a jump table

Bullshit, you cannot inline an indexed switch case without a JMP instruction (the compiler has to use at least one short JMP). For better code maintainability, it is also better to avoid huge switch statements.
programming
I've now discarded two lengthy responses due to just getting cranky about this. I've resigned myself to the apparent reality: my job is to make code that makes money. If a quality of the code cannot contribute directly to the bottom line, I will not get reasonable traction on it and will ultimately be frustrated. Business just doesn't care. Some would argue business shouldn't care, and that has some merit, but is, I think, highly subjective. What I wouldn't give for a job with documented use cases, intelligence about how customers actually use the product in the wild, and a strategic plan to keep that product fresh and relevant for the foreseeable future. Instead, I'll just be over here working around the abandoned products intermingled with the abandoned refactors, all living in this monolith that appears to function on inertia alone. Damn, it's depressing.
programming
>Do you think making software that makes more money now, but loses money in the future, is better than one that makes little money now, but sets up possibilities for more money in the future?

* *Making software that makes little money now* -> you might not survive, or somebody else might overtake your market share, thus you may never get to that hypothetical future where you make more money.
* *Making software that makes more money now, but loses money in the future* -> you are buying yourself a fighting chance, and having more money now gives you a chance to pivot when necessary or, hopefully, spend the resources required to pay down that tech debt in the future and avoid the hypothetical future of "losing money".

Does NOT mean you should get away with writing shitty software and creating an Everest of tech debt.
programming
> Is it like "throw all crappy code out and start again"

Yes, this is a possibility, although it would be more gradual than immediately starting from scratch.

> "use our existing tech to do something new".

No, I actually was thinking about a pivot in the business sense. You realize things are not working well, so you need to change a lot of things; some of your features & functionality need to be thrown away & rewritten. Yes, you need some good base code for the things that you keep and reuse. But for the ones that are thrown away, it will be pointless. You do need a balance.
programming
> I've resigned myself to the apparent reality. My job is to make code that makes money.

I've worked in different domains (applied science, industrial services, industrial components, libraries for web development). What I can say is that the balance of different qualities depends greatly on the environment in question. There are environments which require a lot more quality than others. And one caveat: making high-quality software can be boring at times. Yes, one might use automated testing, but it is the developer who needs to write all those tests.
programming
Maybe you should consider working for a startup? Your code has a real impact on the bottom line.

>What I wouldn't give for a job with documented use cases, intelligence about how customers actually use the product in the wild, and a strategic plan to keep that product fresh and relevant for the foreseeable future.

As an early engineer at a company, you can help set the precedent for this moving forward. Although startups can often push for deadlines, you can push back to keep your code quality high and well documented.
programming
In my experience, the people who are good at thinking up product and hacking out a first-to-market solution are not the same kind of people who are good at making reliable high quality software. This is fine and I accept that. What I struggle with is that the former don't seem to know this or understand where they fall in this grouping. And, since they tend to be founder types, and are often still in control some years down the road, that same mentality is deeply engrained in the organization and there's little hope of turning it around.
programming
> What I wouldn't give for a job with documented use cases, intelligence about how customers actually use the product in the wild, and a strategic plan to keep that product fresh and relevant for the foreseeable future.

Let me inform you that users typically dread product updates. They don't want their tools to be "fresh", they want them to be familiar. They don't care about new whiz-bang features in their software; even if it's clunky and slow, they prefer the process they have mastered rather than having to learn something new.
programming
I'm going to jump far outside software development for the moment. What would you do if you could borrow an unlimited amount of money, on a brain-dead idea, at an effectively zero interest rate (still true after adjusting for inflation), via a corporate shell (so you could capture all the profits but externalize all the risk)? Why, [pets.com](https://pets.com), of course! ([https://en.wikipedia.org/wiki/Pets.com](https://en.wikipedia.org/wiki/Pets.com))

Artificially low interest rates have produced a cascade of bubbles in which business models and products which could never otherwise have seen the light of day, or would have been quickly bankrupted out of existence (taking "management" to ridicule and unemployment, likely for the rest of their lives), get funded and survive anyway. As Warren Buffett likes to say, "It is only when the tide goes out that you discover who has been swimming naked."

Anyway, wonders of the internet or not, we are also living through a complex series of bubbles (only some of them directly economic). In the business environment for at least 20 years, actual "saving" has been a complete loser strategy, and even investing based on careful analysis and testing of market needs and real profitability has not been able to keep up. People have been FORCED to speculate and, as the more conservative speculations dried up, the speculation has become more wild.

In software we are seeing this as: throw crap at the wall and if it sticks at all, "ship it!" No matter that, over longer time frames, it costs far, far more to maintain software than to hack it out, and that some of the crap being churned out is going to harm or kill real people. User interfaces are being gratuitously changed for the sake of "new" but with no real additional functionality anyone much really wants; Tesla becomes worth more than all the car companies in the world (as if there were barriers to entry which Toyota or BMW couldn't - and won't - surmount with near-trivial ease when they decide to do so); and United punches out its customers, while Wells Fargo simply steals from them, with net impunity.

The current era of morbidly obese software and systems which cannot be comprehended by ANYONE (see [https://medium.com/message/everything-is-broken-81e5f33a24e1](https://medium.com/message/everything-is-broken-81e5f33a24e1) for a check-up from the neck up) WILL pass. The trick is to survive it with decent development skills and some perspective and honor intact. I won't claim that is trivially easy.
programming
> Does NOT mean you should get away with writing shitty software and creating an Everest of tech debt.

*triggered*

This is becoming a serious frustration at my current position. We've got a 13-year-old codebase, most of which lives in a single repo that covers way too much ground. And I run the team that ostensibly owns this codebase. The problem is, the people who wrote the majority of this monolith are now the executives of a fast-growing company, so their expertise is mostly inaccessible. Instead, as I keep hiring people for my team, I have more and more people writing super defensive code, because it is hilariously easy to accidentally break a feature you didn't know existed. Hell, I've been here five years and I still routinely discover new features.

The frustration is that I can't seem to get traction for taking these problems seriously. Every time I bring this topic up with my VP, I get some version of "well we can't just stop and rebuild everything" or "what specifically would you change?" Not that these aren't valid points, but I'm not trying to propose specific action, I just want some recognition that this situation is not good, and will only get worse as we keep growing. Maybe if we can agree on *that* point, then we can start figuring out what to do about it. But instead, I just get an endless stream of feature requests conjured up by internal people based on what they think customers want, sometimes in direct contravention of actual user research we've done. And when I'm not getting those, I'm getting sporadic complaint emails from the CEO about how someone on my team "reinvented the wheel." Sometimes it's because they didn't know the wheel existed in our enormous codebase, other times it's because the wheel the CEO wrote ten years ago is wobbly and does four or five magic things that don't apply to the use case we had, and the developer was too scared to try and refactor.

Maybe my problem is that I care too much. For the moment I'm going to continue vocally complaining about the problems, especially when one of my devs gets unfairly targeted when really he's a victim of this kludgebase, while pushing things out the door, but maybe I'll just give up on that and go with the flow. More likely I'll go somewhere else. It's been a good run, and I've enjoyed working here, but lately, I dunno. I used to brag about working here, now I complain about it.
programming
> even if it's clunky and slow, they prefer the process they have mastered rather than having to learn something new.

That is not true. If the new software is actually more performant, they will like it. This can be done while keeping the UI the same. If the UI is slow, or part of the slowness, or needs an update, then as long as it offers the same functionality in the same places and the app is faster / more responsive, users won't mind a UI change. What they do mind is a UI change just for the sake of it, which in many cases makes the app slower and harder to use. The Office ribbon is probably one of the few exceptions that in the long run was better than the old stuff; users simply complained because it worked differently. People adjusted quickly and the complaining died down rather fast. In contrast, Windows 8 was a complete disaster, making things different and worse: harder to use and slower to use.
programming
Having been developer #2, #3, and #5 at a succession of startups, I don’t think this is true. If I ever want to do any codebase health work with an eye to long-term maintainability, it must be justified in terms of immediate business needs or benefits. That’s typically easier with web, because any reduction in bloat cuts down on load time, which helps with SEO,^1 but there you’ve also got a lot of technology churn. Part of the problem is that managers unfamiliar with tech will ask why you didn’t do it right the first time (I did, given the requirements of the time, but that answer gets old after a while), but more often it’s just a question of priorities. With startups especially, keeping up with or getting ahead of the rest of the market is crucial, and even when your boss understands the importance of code quality, those improvements will be deferred indefinitely because new features take priority. ^1 Don’t tell me if this isn’t true or overstated. It’s the only excuse for refactoring I have left that gets any traction with management.
programming
Just on your first point, I think the distinction is between business-person and professional. The business person sees technical work as a means to an end (e.g. valuable benefits, in the opinion of customers). A professional sees technical work as valuable in itself. These might also be characterised as marketing vs engineering. You need both. Honestly, if you look at the merits, the business person is actually doing good in the world, whereas the professional is just obsessed with some abstract, impersonal notion of "good". To your second point, over time one expects code quality to become more important... but it really depends on how settled the business and business environment really are. Typically, businesses do not last forever, and it's because they don't adapt. That is, a focus on what people actually want is crucial. However, I agree that it does shift somewhat towards code quality, and founders might not shift enough.
programming
Depends on your available funds and the market. As a startup, yes, you need to release it and get some income and market adoption. As a big company entering an existing or new market, it's better to think about maintenance cost up front, also because you already have experience in it and the funds. Plus, a shitty product in a new market can impact your brand in your existing market. In the end, the best way would be some form of risk balance, but because that is hard to do, bean counters just look at income, and hence you will always have to deliver quickly; it's always the "Quality" aspect of the project management triangle that loses out. And no, this also applies to stuff where lives are at stake. Just see the Tesla Model 3 braking issue: if a software update can reduce your braking distance greatly, then you clearly risked lives with shitty software.
programming
Like you said, software is just a product, same as physical products, and none of those are perfect in every aspect. You always want to make a great product, but the reality is, it is just a product, and most likely your software needs to be the equivalent of a vomit-triggering McDonald's burger rather than a $100+ steak made from fresh, high-quality ingredients at a top-end restaurant. I only wonder how people who create physical products feel about their products being shit quality / knowing about their defects/bugs/etc.
programming
There's truth to what you say. At the same time, I want to push back a bit, because software and technology are also *immense* drivers of efficiency gains. We joke about disruption, but so many conveniences of modern life are driven by someone having a software idea, which changes everything. Investors may or may not be better off putting money into 10 or 20 startups hoping one hits it big, but the economy overall certainly is.
programming
I understand your frustration. How about coming up with concrete proposals to fix the areas you are not happy about? It's much easier for a VP to give you time and resources when you have a concrete plan that includes the costs and the benefits. I know this is hard to do in software, but it might be easier if you frame it from a business point of view. Give them a choice they can say yes/no to. Hell, even better, give them multiple choices (cheap, somewhat cheap, expensive) and make the "somewhat cheap" option the most attractive. I am not trying to downplay your frustration; I am merely pointing out that your bosses are no longer developers and are probably approaching things from a business point of view. I'd guess they expect you to come up with a plan. From my experience, they don't want to "figure it out" and "come up with a solution"; they want you to propose a solution and then agree to it (or not, but then at least you tried). EDIT: Start small; fixing these kinds of long-term issues in a massive code base is tough. Go for fixes that are cheap but have a high impact.
programming
Yeah, it's hard for me because when I started here, it was kind of a dream job. I'd gotten burned out on dev work in general and this place let me do dev work that was related to a hobby/passion of mine. And as a bonus, it was frankly a great software shop, too - I had a lot of leeway to propose and implement improvements and refactors, I got regular raises and bonuses - in short, I felt like I was contributing cool things to a larger cool thing, and getting recognition for it, both in the sort of emotional kudos sense and in the bank account sense. I still like the company and its business. I still like a lot of the people I work with/for. But the last year or so, I feel like things have changed, and not for the better. I feel like I'm less of a contributor and more of a trained monkey - "the business wants this, we're going to do this." My gripes about our codebase are a prime example. I've stayed here far longer than any other dev job, for all the good reasons stated above. I do feel invested in the company, and I do want to keep making things better, but eventually I'm going to get tired of supposedly being a technical expert and team lead, but practically getting shut down when I advocate from those perspectives.
programming
The problem there is that even I don't understand the big picture here. I feel like it would be myopic for me to propose solutions which, in the real world, affect teams other than my own, and probably in ways that aren't obvious to me. The basic thing here is that while my team is ostensibly responsible for web-facing stuff, part of the codebase we own is core to the business and affects other teams as well. Basically, right now I feel like I'm a sort of "unknown unknowns" spot when it comes to figuring out a path forward, and there's so much pressure on my team and me that the idea of having time to get with the other team leads and hash out next steps seems impossible. And it's hard for me to put a lot of effort into that when I don't even feel like there's agreement that there's a problem here. All that said, I *do* think that I need to do a better job of forming cohesive arguments, both in terms of describing the problems I see and some semblance of a strategic path forward. I think I take it for granted a little too much that my VP will agree with my technical observations, since he came up in the org the same way I did. I guess what bugs me is not that the business doesn't just do what I say we should do, but that I feel like my concerns are dismissed outright with no real consideration. I know good and well I'm regularly not going to get what I want, but the way these conversations are going lately, I don't feel like my input is valued at all.
programming
From the Medium article you linked:

> Every malware expert I know has lost track of what some file is, clicked on it to see, and then realized they'd executed some malware they were supposed to be examining.

I've done this, though I'm no expert by any stretch of the imagination. The day NIMDA struck I had Apache, so it wasn't a real issue for me. Except that, after seeing my logs being hammered, I visited one of the sites the traffic was coming from. Malware was a lot easier to manually clean up back then.
programming
Getting a feature out the door a day or two faster won't make or break the startup. Those deadlines are determined by management, but as an early engineer, you often have just as much say as they do. Just stand firm when it comes to needing to keep code quality high, or better yet, don't consult them at all about any refactoring needs (if they're not too severe) - just incorporate it into your feature implementation estimations from the get go. Keeping code quality high directly translates to fewer bugs and faster development time in the future (sometimes by a factor of 3 or more), so severely sacrificing code quality in order to get a feature out is almost never worth it (or if you're forced to, it should be made a priority to refactor the implementation immediately afterwards). If the codebase is already in a great state, it's a lot easier for your team to keep it that way.
programming
> I feel like it would be myopic for me to propose solutions which, in the real world, affect teams other than my own, and probably in ways that aren't obvious to me. The basic thing here is that while my team is ostensibly responsible for web-facing stuff, part of the codebase we own is core to the business and affects other teams as well.

As an outsider to the whole thing: can you propose a solution that splits off what affects others? Kinda like, in a system where the frontend and backend are intertwined, proposing a split between frontend and backend, since different teams are working on them anyway.
programming
Start your own software company. Then you’ll have an entirely different perspective. Or just pretend like you own the company you work for. It is very easy for us developers to get on our high horses about all the things we think matter. But when you strip away all the bullshit and are just left with the question: how does this help us make payroll? Things become much clearer. It’s also interesting how few developers are willing to die on the security hill but will wail about documented requirements or time for refactoring etc. Anyway, nothing will make you appreciate the relative value of your favorite developer priority like trying to cash flow it.
programming
What I find interesting is the unsolved main issues that are exposed over time. A great current example is a whole raft of race conditions in one component that have been there since the beginning, but have been exposed and widened by changes in the performance of another component. Now that we're tasked with cleaning up this mess, tests and documentation would do a lot to tell us what the underlying intent was, what the expected behavior is, etc. Instead, we have to do our best and hope we don't break a whole ton of customers. The original authors believed they had solved the main issues, but in fact hadn't. So it goes.
programming
>Just stand firm when it comes to needing to keep code quality high, or better yet, don't consult them at all about any refactoring needs (if they're not too severe) - just incorporate it into your feature implementation estimations from the get go.

This is exactly what I do as well. No one asks a chef to skip cleaning his workplace because there are tables waiting; a clean kitchen is part of the process, not an option but a requirement. Stakeholders want developers to cut corners because some inexperienced developer once told them that it is possible to cut corners without any problems. Let that not be an option in the first place. Having a clean codebase with well-thought-out variable names and properly abstracted class hierarchies definitely gives me, a developer, fewer grey hairs, and that actually helps me be more productive. That's all the reasoning and justification I need.
programming
This echoes my experience pretty deeply. Do we work together? Probably not, we've only been at this for 10 years, not 13... lol. I'm lucky in that my immediate manager understands -- he believes we should be investing in improving the code so that we become more flexible, can deliver faster, etc. and he's seen it work in this code base. It helps that there are *so* many field escalations that we really can get some traction for fixing things, and we work in dead code deletion and refactors as part of that whenever we can. But we're fighting an uphill battle.
programming
Once the speculation ends, there will be no replacement for it. I've seen people recently turn away from established high-reliability methods to "we'll just use OSS stuff"[1]. They do this not because they think there's any productivity gain; they do it because it feels "more egalitarian". [1] You can do high-rel with *some* OSS stuff, but not all. The vast thrust of OSS pays no attention at all to high-rel. Our whole discipline is at serious risk.
programming
As the pace of change accelerates, long term investment becomes less important (because less predictable). We might see a new domain of professionalism, at a higher level of abstraction in some sense, that can endure across at least some of these changes. And, in the race to the bottom, customers pay less for lower quality. This is ironically driven by lower wages, caused by concentration of wealth, won in the race to the bottom. *I'm here all week, try the lobster.*
programming
> What I wouldn't give for a job with documented use cases, intelligence about how customers actually use the product in the wild, and a strategic plan to keep that product fresh and relevant for the foreseeable future.

Or hell, even just a two-page vision outline or project definition. It's baffling how often we're set to work making something without anyone having a clear picture of what we are supposed to be making.
programming
> Maybe my problem is that I care too much. For the moment I'm going to continue vocally complaining about the problems

Did that, got fired. I kept telling my boss that the code base was bad and needed to be refactored; the people around me who were already working on it weren't bothered by it. They are all still there, and I found a different job where they actually care about code quality. Win-win, I guess.
programming
"It's not fair. When we screw up, it's fair, we have to fix it. But it feels less fair when we have to fix someone else's problems."

Oh Linus, you're funny. There's only one fix for this: you should make your own open source hardware. Make some chips, spend your own money on them, and then, when they are done and a minor flaw is found that you had the foresight to make fixable in software, I'm absolutely certain you'll decide, "No, such a thing would not be fair to make Linux fix. I will instead take all these chips back, build more, and pay my own money to ship them back and forth so people can have working computers without Linux having to lift a finger." And don't forget to pony up for new motherboards in systems where the CPUs are soldered down.
programming
I'm guessing there'll be a ton of these counters, which help in maintaining the aircraft. The same goes for most machinery; it's an easy way to assess the state of the various machine components without shutting the machine down and opening it up. The push now is to add machine learning on top of the telemetry instead, so that parts can be maintained via predictive analysis. However, if we can't even prevent simple mistakes like this getting into live machines, we'll only be adding more complexity to a system we already can't manage.
programming
In an airplane you have a lot of time-dependent stuff - computations for velocity and radar, but also a host of devices and interfaces where you say, "If this doesn't respond within X amount of time or is giving garbage answers for at least Y amount of time, treat the device as defective and escalate the alert level". You use a separate, elapsed-time-only clock for that stuff because a regular, UTC-based internal clock may need to be reset or changed periodically. Allowing resets of the "wall time" clock means you can't guarantee that it's continuous and strictly monotonic, so for stuff that's sensitive to elapsed time but not wall time, you use a separate clock that does make those guarantees.
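A minimal sketch of that separation in C, using the POSIX clock APIs as a stand-in (an avionics RTOS would expose its own primitives; the clock IDs here are just the desktop analogue):

```c
#include <stdio.h>
#include <time.h>

/* Deadline checks should use a clock that only ever ticks forward.
 * CLOCK_REALTIME (wall time) can be stepped backwards or forwards by an
 * operator or by time synchronization; CLOCK_MONOTONIC cannot. */
static double elapsed_seconds(const struct timespec *start,
                              const struct timespec *end)
{
    return (double)(end->tv_sec - start->tv_sec)
         + (double)(end->tv_nsec - start->tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... poll the device, wait for a response ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (elapsed_seconds(&t0, &t1) > 2.0) {
        /* the "doesn't respond within X amount of time" case: escalate */
        fprintf(stderr, "device timeout, treating as defective\n");
    }
    return 0;
}
```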
programming
I do a lot of work on industrial automation systems that have the dynamic duo of millisecond-level response times and 16-bit words. Counting every millisecond, you overflow a 16-bit counter in about a minute. And 32-bit math is available, but 32-bit timers are not, while 16-bit timers are dirt cheap. The typical response is that you make your counters resilient to overflow, or reset them when they are about to overflow. If the problem occurs once a minute, you quickly find out whether your overflow math works correctly, and you can depend on it. 248 days is long enough that the authors could have shipped broken overflow protection and forgotten to check that it worked.
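A sketch of the overflow-resilient flavour in C, assuming a hypothetical free-running 16-bit millisecond tick: because unsigned subtraction is defined to wrap modulo 2^16, the elapsed value stays correct across a single counter overflow, as long as measured intervals stay under about 65 seconds.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for a hypothetical free-running 16-bit millisecond tick;
 * on real hardware this would be a memory-mapped timer register that
 * wraps from 65535 back to 0 about once a minute. */
static uint16_t fake_tick;
static uint16_t read_tick_ms(void) { return fake_tick; }

/* The cast forces the defined modulo-2^16 unsigned wraparound, so the
 * result is the true elapsed time even across one overflow. */
static uint16_t elapsed_ms(uint16_t start, uint16_t now)
{
    return (uint16_t)(now - start);
}

int main(void)
{
    uint16_t start = 65500u;   /* sampled just before the counter wraps */
    fake_tick = 100u;          /* counter has since wrapped past zero   */

    /* Prints 136: the wrap is invisible to the modular subtraction. */
    printf("elapsed: %u ms\n", elapsed_ms(start, read_tick_ms()));
    return 0;
}
```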
programming
Nope. But they are powered down completely at the end of a sequence of flights. Most airports don't have departures scheduled between 1am and 5am or so local time, so if an aircraft arrives at 1am they will park it and power it off until the next flight several hours later. On the other hand, the Dreamliner is a long-distance aircraft that will often fly overnight across oceans, so it will often depart at 9pm and arrive at its destination at 6am local time, whereupon it will be turned around and fly another long-distance flight. In that case, it wouldn't be powered off between flights. But airliners need pretty constant maintenance; again, that's part of the reason flying is so safe. The 787, though, has exceptionally long maintenance intervals by design. I think the target for the 787 was something like 1,000 hours of use between line checks. I don't know what the maintenance interval is in practice, and different systems require checks at different periodicities (e.g. an engine may be swapped in that requires a check every 1,000 hours, but the engine had 500 hours on it when it was swapped in and the airplane's last check was only 200 hours ago... so that bird may get its next line check at 700 hours), but airlines do try to synchronize them. So it's not unrealistic for the Dreamliner to hit this limit, but they aren't rebooted between each flight. Unless it's an Embraer. (That's a pilot joke...)
programming
Meant what I said. It's not unrealistic, because the normal maintenance interval is more than the reboot time. The Dreamliner is a high-hours, low-cycle (aka long-distance) airliner. If an airliner is on the same route flying across the Atlantic (I think JFK->Frankfurt is a 12-hour flight, for instance) and they turn the plane without rebooting at each end, then it would take ten days to hit this limit.
programming
1. Multiword arithmetic creates the possibility that you update the low word of a pair but not the high word (or vice versa), and if another process reads the value in the meantime it gets a bogus result. You can't just lock out the other threads, because that could delay or even lose potentially vital sensor information.

2. In all likelihood, this clock runs on a simple digital accumulator, similar to most quartz watches, rather than on a general-purpose CPU.
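For point 1, one well-known lock-free pattern is to read the high word, then the low word, then the high word again, retrying if the high word changed mid-read. A minimal sketch in C (the register names are hypothetical, and this is not necessarily what this particular system does):

```c
#include <stdint.h>

/* Hypothetical halves of a 32-bit tick counter, updated elsewhere by
 * hardware or an interrupt handler: the low word wraps, then the high
 * word is incremented. 'volatile' because they change outside the
 * reader's control. */
volatile uint16_t tick_hi;
volatile uint16_t tick_lo;

/* Torn-read protection without locks: if the high word changed while
 * we were reading the low word, a carry may have landed mid-read, so
 * retry. The writer is never blocked, so sensor data isn't delayed. */
static uint32_t read_tick32(void)
{
    uint16_t hi, lo;
    do {
        hi = tick_hi;
        lo = tick_lo;
    } while (hi != tick_hi);
    return ((uint32_t)hi << 16) | lo;
}
```

This assumes wraps of the low word are rare relative to the read loop; on a single-core microcontroller, briefly disabling interrupts around the two loads is the even simpler alternative.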
programming
So why not just have the counter reset once it hits the top value and calculate the difference between the two points? That's how we do it, and I work for an aerospace company. This will have been written up in a problem report and will be fixed whenever the next software package is deemed necessary. Unless they realise that 248 days is an unachievable time between resets, or, of course, that the reset that does occur is not detrimental to flight. You can have in-flight resets; that's why you have multiple channels.
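A sketch of that difference calculation in C, generalized to a counter that rolls over at an arbitrary top value (the names and the top value are made up; with a power-of-two-sized counter, plain unsigned subtraction does the same thing implicitly):

```c
#include <stdint.h>
#include <stdio.h>

#define COUNTER_TOP 999999u   /* hypothetical roll-over value */

/* Difference between two counter samples, assuming at most one
 * roll-over occurred between them. */
static uint32_t counter_diff(uint32_t earlier, uint32_t later)
{
    if (later >= earlier)
        return later - earlier;
    /* counter rolled over between the samples: add back the wrapped span */
    return (COUNTER_TOP - earlier) + later + 1u;
}

int main(void)
{
    /* first sample just before roll-over, second sample just after:
     * 999990 -> 999999 is 9 ticks, the wrap is 1, 0 -> 5 is 5 more */
    printf("%u\n", counter_diff(999990u, 5u));   /* prints 15 */
    return 0;
}
```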
programming
It's a question of semantics, really. Take GCC's `fwrapv` option, for example: it's not standard C, so we can call it C-with-GCC-extensions or C-with-overflow or OverflowC or even "G" ... with well-defined signed integer overflow. What's important is whether it's well-defined on the exact platform they're targeting. If they're targeting standard C? It's undefined. If they're targeting Ada? It's an error. If they're targeting a custom language that's effectively <standard language> + overflow extension? It's well-defined. Portable, standard C is important. But sometimes the nature of embedded programming means you have to use a platform-specific variant. I hope that's not the case for a safety-critical device... In the context of your original comment, it could even be raw assembly for whichever ISA, with well-defined overflow. Side note, even with Ada, apparently [non-conforming/non-standard compilers exist](https://softwareengineering.stackexchange.com/questions/324771/why-is-overflow-silently-allowed-in-ada) which will not check for overflow. I'd certainly not recommend relying on this behaviour, but it's there.
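A concrete illustration of how the same expression changes meaning across those dialects (a sketch; the first function is only sound in something like the `fwrapv` dialect, while the second is defined by standard C everywhere):

```c
#include <limits.h>

/* Under standard C, signed overflow is undefined: the compiler may
 * assume x + 1 never wraps and fold this whole test to 0. Under
 * GCC/Clang's -fwrapv dialect, signed arithmetic wraps two's-complement
 * style and the test works as the programmer intended. */
int will_overflow_naive(int x)
{
    return x + 1 < x;
}

/* Conversion to unsigned and unsigned wraparound are both fully defined
 * in standard C, so this check means the same thing in every dialect. */
int will_overflow_portable(int x)
{
    return (unsigned)x + 1u > (unsigned)INT_MAX;  /* true iff x == INT_MAX */
}
```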
programming
Would you accept that [it's well-defined in C#](https://stackoverflow.com/questions/26225119/is-c-net-signed-integer-overflow-behavior-defined)? My point in both the original and follow-up comment is that there is no universal rule that signed overflow is undefined. Heck, it's definitely well-defined in x86 assembly, and almost certainly in most other ISAs. At the end of the day, standard C is just one of the few languages that have arbitrarily declared it undefined *within that language* (and said declaration can be 'overridden' by a derivative language, not standard C, implemented by some compiler). In fact, "undefined behaviour" itself in this sense has absolutely no meaning outside of standard C (or a slightly different meaning within standard C++), because that phrase only has that meaning within the definition of the Standard. Even your Ada example is well-defined: an error condition, but well-defined. What you've said is completely correct with respect to standard C.