| url (string, 13–4.35k chars) | tag (string, 1 class: code) | text (string, 109–628k chars) | file_path (string, 109–155 chars) | dump (string, 96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
https://www.denguedenguedengue.com/how-does-license-server-work/
|
code
|
How does license server work?
But what is a license server? To keep track of the licenses and users, the license server uses a centralized computer software system that gives access tokens – also known as software license keys – that allow licensed software to run on a client’s computer. No token – no access.
What is a network license server?
With the Network License method, a license server monitors the number of clients that can run the software, rather than a license being obtained for each client. The PC that manages all of the licenses is known as the license server, and the PCs that use the calculator software are known as the clients.
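The floating-seat model described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's actual implementation: a central object hands out access tokens while free seats remain and refuses once the pool is exhausted.

```python
import threading

class LicenseServer:
    """Minimal sketch of a network (floating) license server.

    A real license server would also handle authentication,
    heartbeats, and persistence; this only models the seat count.
    """
    def __init__(self, max_seats):
        self.max_seats = max_seats
        self.checked_out = set()
        self.lock = threading.Lock()

    def checkout(self, client_id):
        # Grant a token only while free seats remain.
        with self.lock:
            if client_id in self.checked_out:
                return True  # client already holds a token
            if len(self.checked_out) < self.max_seats:
                self.checked_out.add(client_id)
                return True
            return False  # no token, no access

    def release(self, client_id):
        # Return the seat to the pool when the client exits.
        with self.lock:
            self.checked_out.discard(client_id)

server = LicenseServer(max_seats=2)
print(server.checkout("pc-1"))  # True
print(server.checkout("pc-2"))  # True
print(server.checkout("pc-3"))  # False: all seats in use
server.release("pc-1")
print(server.checkout("pc-3"))  # True
```

The clients here are identified only by a name; in practice the server would tie tokens to authenticated machines and reclaim seats from clients that stop responding.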
How do I find my Licence server?
To open Remote Desktop Licensing Manager, click Start, point to Administrative Tools, point to Remote Desktop Services, and then click Remote Desktop Licensing Manager. Right-click the license server for which you want to view the license server ID, and then click Properties. Click the Connection Method tab.
Why do I need a server license?
When a customer buys Windows Server Standard or Datacenter, they receive a server license that allows them to install the operating system on one computer. A server software license by itself doesn’t provide the license rights to allow anyone to connect to that computer, whether they work for the company or not.
What happens if Citrix license server goes down?
If the License Server goes down, both sites go into the grace period. Each site allows up to the maximum number of licenses installed. As above, the user/device licenses have no limit.
Who might use a single user license?
The authorization that grants one user the right to use a software package. It may grant the user the right to install the software on only one machine, or it may authorize installation on any number of machines as long as that same licensee is the only user.
Is my server activated?
Start by opening the Settings app, and then go to Update & Security. On the left side of the window, click or tap Activation. Then look on the right side, and you should see the activation status of your Windows 10 computer or device.
Do I need a CAL for each server?
The general requirement is that any user or device that accesses the server software, either directly or indirectly, requires a CAL. But you don't need to purchase a CAL for each user/computer added to AD; you only need the appropriate number of CALs for your users or devices to use Active Directory legally.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00512.warc.gz
|
CC-MAIN-2022-33
| 2,471
| 16
|
https://forum.c-command.com/t/mail-macosx-sierra-import-messages-script-problem/11540
|
code
|
I have been using the Import from Apple mail script:
for long time and it worked really well,
BUT, after yesterday's upgrade to macOS Sierra I am getting errors when I try to grab e-mails which contain attachments. I get in the EF “Errors” window:
“Missing Apple Mail Message Attachment” and message_number.partial.emlx
Interestingly, this seems a bit unreproducible. I even grabbed a message once with no problem and then, later, the same message with the problem. When I open the message which showed the missing part from EF in Mail again, the attachments show with an arrow as if they need to be downloaded.
And I checked in “Manage” under About This Mac > Storage that in Mail I am downloading all the attachments.
Also, if I use a smart folder and display the messages as “Conversations”, in 10.11 Mail the script grabbed messages for the whole Conversation (Inbox and Outbox content), but in 10.12 it seems to grab just the top, last, Inbox message, ignoring the other ones. That is quite inconvenient.
It seems Apple has made some serious under-the-hood changes?
Michael, can you, please, look into this and see if the script can be updated somehow or else fixed?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894426.63/warc/CC-MAIN-20201027170516-20201027200516-00510.warc.gz
|
CC-MAIN-2020-45
| 1,164
| 9
|
https://www.lynda.com/HitFilm-Express-tutorials/Converting-tricky-formats/647681/686741-4.html
|
code
|
In this video, learn what to do if you have awkward or old video files.
- Most modern computers can handle video files pretty effectively, but occasionally you might run into some weird formats that don't perform very well, particularly if you're working with old archival footage. The good news is there's a free bit of software on both Mac and PC which can convert pretty much anything you throw at it into something more useful. It's called Handbrake, and you can download it from the website on the screen right now. Now, it has full documentation, but in terms of just the essentials of what you need to know, here's how you use it. After downloading and starting the software, drag the troublesome video file into the Handbrake interface.
This will set it as the source. You'll then want to check out the list of presets on the right and choose one that fits your needs. Aim for something which fits your intended resolution and frame rate. If you're in the US, for example, one of the 1080p 30 presets will be a good starting point. In the Destination field, you'll then need to specify what you want the newly converted video file to be called. After that, all you need to do is start the encode by hitting the Start button. Depending on the duration and format of the video file, you'll then need to let your computer do its thing for a while, so go grab a cup of tea, and by the time you get back, you should have a shiny new video ready to go.
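HandBrake also ships a command-line companion, HandBrakeCLI, so the steps above can be scripted for batch conversion. This is a rough sketch only: the file names are placeholders, and you should check HandBrake's documentation for the flags and preset names available in your version.

```python
import subprocess

def convert(src, dst, preset="Fast 1080p30"):
    """Convert one troublesome video file with HandBrakeCLI.

    Mirrors the GUI workflow: set a source, pick a preset that
    matches your resolution/frame rate, name the destination, start.
    """
    subprocess.run(
        ["HandBrakeCLI",
         "-i", src,            # source: the awkward or old video file
         "-o", dst,            # destination: the newly converted file
         "--preset", preset],  # e.g. a 1080p 30 fps preset for US footage
        check=True,           # raise if the encode fails
    )

# Example (assumes HandBrakeCLI is installed and on your PATH):
# convert("old_archival_clip.avi", "converted_clip.mp4")
```

As in the GUI, the encode can take a while depending on the duration and format of the source, so a batch script like this is a good candidate for running overnight.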
- Getting started with HitFilm Express
- Setting up a camera and lighting
- Making a shooting checklist
- Shooting on a green screen
- Transferring from camera to computer
- Converting video formats
- Importing videos into HitFilm
- Using essential editing techniques
- Using multiple tracks
- Making color corrections
- Working with keyframes and composite shots
- Creating titles and lower-third captions
- Exporting and sharing video
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145316.8/warc/CC-MAIN-20200220224059-20200221014059-00023.warc.gz
|
CC-MAIN-2020-10
| 1,891
| 16
|
https://forum.netgate.com/topic/86774/bog-standard-dmz-setup
|
code
|
Bog standard DMZ setup
Hi guys. I am trying to set up a bog-standard DMZ configuration. My LAN is 192.168.1.1 and I want the DMZ to be 192.168.2.1. I am running VPN as well. I have pfSense and Ubuntu running on a XenCenter VM. I want Ubuntu to be the DMZ. I will attach some screenshots of what I have so far. From Ubuntu I can ping 192.168.1.1, 192.168.2.1 and 192.168.2.2 but I cannot get to the internet. I appreciate your help. I know there are many DMZ posts, but there are so many that I was having trouble figuring out which would be applicable to me. I am having trouble uploading pictures, which I realize is essential to me getting help.
I'm assuming your Ubuntu machine is on an OPTx interface, which would suggest you need to have an allow rule set up on the OPTx interface.
The Lan interface by default will allow all out onto the net.
From Ubuntu I can ping 192.168.1.1, 192.168.2.1 and 192.168.2.2 but I cannot get to the internet.
In a truer sense of a DMZ, I would have a rule which blocks your Ubuntu machine and anything else on the OPTx network from contacting your LAN network.
I have added a screen shot of the interface rules. For some reason I had a terrible time uploading this morning.
Are you logging your rules, and seeing what's being blocked and allowed in the fw logs?
The blocked packets should show up in the fw log. From memory (I'm on a different machine in a different location to the pfSense machines ATM), there is also an option in one of the general/main system settings/config GUI pages to log everything, which might need ticking as well.
What IP space is your OpenVPN using? In your pfSense LAN & OPT1 details, do you have a gateway defined for either? (hint: you should have a gateway defined for pfSense LAN or OPT1). What are the interface details for your Ubuntu box's LAN interface?
dotdash last edited by
(hint: you should have a gateway defined for pfSense LAN or OPT1).
He means SHOULDN'T. I'm sure it's a fatfinger.
You don't need those rules on the WAN. What you do need are rules on the DMZ, similar to the LAN rules to allow traffic out.
He means SHOULDN'T. I'm sure it's a fatfinger.
JFC, I read that 3 times to make sure I didn't have it reversed…
What you do need are rules on the DMZ, similar to the LAN rules to allow traffic out.
I think his goal is to restrict exactly what the DMZ can access externally. I forget if you also had to allow access to the pfSense DMZ/This Firewall interface or not and I'm not in a position to try it now.
Thanks for the reply folks. Sorry for my delayed response as I have family in town. @KOM I do not have a gateway defined for either the LAN or the DMZ. I am not sure what you mean by IP space for the OpenVPN. I configured OpenVPN exactly as described in the link below.
The network configuration for the Ubuntu machine is attached. I had not looked at the firewall log, but my word is it active. I have attached a copy of the page and I am sure I have something wrong based on this.
Do you want to expose your DNS queries to an external source, namely 10.200.0.1? Plus, the DNS server may not know anything about your internal network setup.
Another way of looking at it, if something happened to your Ubuntu machine, logging its DNS queries could show up potential hacks to the machine.
When you say in your first post "I want Ubuntu to be the DMZ", what exactly do you mean?
If you want to expose some services to the net, as it will have a different IP address to the pfsense DMZ interface namely (192.168.2.1), would a port forward to the ubuntu machine namely 192.168.2.2 be more appropriate?
@firewalluser asks what exactly I mean by "I want Ubuntu to be the DMZ". Perhaps this is a good question to answer, since perhaps I am going about things all wrong. I would like to establish an ownCloud file-sharing system on the Ubuntu machine so all my family can share pictures with each other. Hence, with all my reading, I determined that the way to do this was to have the Ubuntu server on its own subnet while being accessible from the internet.
It doesn't really have to be that way but it is more secure. You could have it on the existing subnet and just port-forward 80 and 443. I just went through this myself with my own domain and SSL cert. I now have an HTTPS owncloud running on a VPS. But I digress…
Have you tried nuking all your existing OPT1 rules and replacing them with an Allow All just to see what's going on? Then you could add a rule that prevents access from OPT1 to LAN. Get it working loosely and then tighten it up.
And make your DNS pass rule TCP/UDP. DNS can use both.
Might also be worth bearing in mind that PF behaviour in FreeBSD has changed from earlier versions, so it's worth nuking the states after making changes to the rules. I.e., you work with the allow-anything-first principle, and as you add new rules to tighten things up, make sure existing states from old rules don't still exist.
I can get the Ubuntu VM to work from LAN but it seems never from DMZ.
Please post a screencap of your current DMZ rules. This shouldn't be hard. An Allow LAN to Any rule just like the one you have on LAN should do it.
Thanks for all the suggestions. I agree that it shouldn't be that hard. For some reason it was turning into a real ordeal. I have finally, tonight, had some success. I can now access the internet from the Ubuntu VM. I am able to access 192.168.1.1 but cannot access the rest of the 192.168.1.x network, which I suppose is the intent. For some reason it wouldn't work unless I specified the gateway to be the WAN. I have two gateways, as one is the VPN. The LAN is set up to have a default gateway, and I think I have a rule that forces everything out the VPN unless another rule is in place. I am not sure why this didn't also apply to the 192.168.2.x network. So, I tried to force it out on the VPN, and the internet does not work then on the Ubuntu VM.
So pardon the new question that I know will give me away as a total NOOB, but… If I want to set up ownCloud on the Ubuntu server, would it completely defeat the purpose of everything I have gone through to map a FreeNAS drive to the Ubuntu VPN to be used as cloud storage? Or is simply mapping a folder to be used for the cloud still maintaining a sound firewall setup. Thanks.
I did a little more tinkering and I thought the two screenshots below would help shed some light on what is going on. The outbound rules for 192.168.2.0 are required or the internet on the VM will not work. I don't know if the 1:1 rule is required. I suppose that is why I can access 192.168.1.1 from 192.168.2.1.
I am able to access 192.168.1.1 but cannot access the rest of the 192.168.1.x network, which I suppose is the intent.
Usually, unless you have changed it, 192.168.1.1 is going to be the LAN GUI address. If this is the case, do you really want to access the fw from your DMZ? This also ties in with your 1:1 port mapping screenshot.
On your dashboard, what IPs are showing for your interfaces? Obscure the WAN IP address.
I have deleted the 1:1 interface for 192.168.1.1 to 192.168.2.1 but I can still access 192.168.1.1 from the 192.168.2.x subnet. Why would that be?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00238.warc.gz
|
CC-MAIN-2021-21
| 8,648
| 56
|
http://www.infoworld.com/article/2622671/collaboration-software/5-tips-for-sharepoint-2010-deployment-and-configuration.html?page=2
|
code
|
To run the tool, you first have to make sure SharePoint 2007 is upgraded to SP2. Then, in an administrative command prompt, you navigate to %COMMONPROGRAMFILES%\Microsoft Shared\Web Server Extensions\12\bin, type stsadm.exe -o preupgradecheck, and read the results.
SharePoint 2010 tip 4: Extending the Web application for alternate access mappings
One cool feature of SharePoint 2010 is its ability to provide different URLs to access the same site or site collection. This is done through a feature called Alternate Access Mappings. AAMs are great for when you want to load balance SharePoint or make it work with reverse proxies like Forefront TMG. They're also well-suited for providing access to the same sites through different authentication methods.
However, to accomplish this, you need to perform a task called extending (or cloning) your Web application. By extending the Web application, you can provide different authentication methods through five separate zones (Default, Intranet, Internet, Extranet, and Custom) with different URLs. This is incredibly helpful if you have a site that is, for example, aimed at both intranet and extranet users, but you want to provide only HTTP access for intranet users and deploy claims-based authentication for extranet users. By extending the Web application and establishing a new zone for extranet users, you can establish unique authentication and a unique URL for those users.
SharePoint 2010 tip 5: Achieving a 1:1 site collection/content database ratio
In SharePoint 2010, you'll find interesting scalability facts about content databases and site collections. For example, although you can place multiple site collections in a single content database within SQL, the typical size recommendation per site collection is 100GB, and 200GB is the recommended maximum size for a content database. That 200GB recommendation is not a cap: Content databases can grow, but for the sake of performance, Microsoft recommends the 200GB limit. However, 200GB goes quickly, so the wise move is to create additional content databases and move site collections into them so that you have a 1:1 ratio of site collection and content database.
I hope these tips point you in the right direction as you begin your journey in working with SharePoint.
This article, "5 tips for SharePoint 2010 deployment and configuration," was originally published at InfoWorld.com. Read more of J. Peter Bruzzese's Enterprise Windows blog and follow the latest developments in Windows at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191986.44/warc/CC-MAIN-20170322212951-00630-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 2,607
| 10
|
https://github.com/samuelhopkins
|
code
|
Create your own GitHub profile
Sign up for your own profile on GitHub, the best place to host code, manage projects, and build software alongside 28 million developers. Sign up
A Markov chain generator for lyrics. Scrapes data from Genius.com with a multithreaded scraper
Scheduling application for University of Chicago Admissions Office
Python application that will send surf condition updates via text messages
Markov chain lyric generator app.
Python script that blacks out screen for 20 seconds every 20 minutes to protect the eyes of screen starers everywhere.
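The Markov chain lyric generator listed above isn't shown on the profile page, but the general technique is simple enough to sketch. This is a generic illustration, not the repository's actual code: record which word follows which in a corpus, then walk that table at random.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word (or word tuple) to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Random-walk the chain to produce up to `length` words."""
    random.seed(seed)
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no observed successor
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the rain falls and the rain sings and the night falls"
chain = build_chain(corpus)
print(generate(chain, length=8, seed=42))
```

A real lyric generator would train on scraped song text (as the profile's Genius.com scraper suggests) and typically use a higher `order` for more coherent output.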
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584519382.88/warc/CC-MAIN-20190124055924-20190124081924-00038.warc.gz
|
CC-MAIN-2019-04
| 560
| 7
|
http://www.coderanch.com/t/380393/java/java/NIO-MalformedInputException-German-Umlauts
|
code
|
NIO MalformedInputException & German Umlauts
posted 9 years ago
I have an NIO server and wish to send messages containing German umlauts
such as ä, ö, ü. This works fine on the local development machine, but
on the internet server it throws an exception:
: Input length = 1
at java.nio.charset.CoderResult.throwException(Unknown Source)
at java.nio.charset.CharsetDecoder.decode(Unknown Source)
I am using 1.5.0_06 on the server and 1.5.0_07 locally. The server is
also stationed in germany.
I am using this to get an ascii decoder:
_asciiDecoder = Charset.forName("ISO-8859-1").newDecoder();
I also tried defaultCharset() but that didn't work either.
Anyone have an idea what I need to check?
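A likely culprit (an assumption, not confirmed in the thread): the umlauts arrive as multi-byte UTF-8, and either the wrong charset is used to decode them or a network read splits a character across buffers. Both failure modes are easy to reproduce, sketched here in Python for brevity; Java's NIO CharsetDecoder behaves analogously.

```python
# "MalformedInputException: Input length = 1" usually means the
# decoder hit a byte sequence that is not valid in its charset.
text = "über"                  # German umlaut
data = text.encode("utf-8")    # the 'ü' becomes two bytes (0xC3 0xBC)

# ISO-8859-1 maps every byte to a character, so decoding never fails,
# but UTF-8 bytes come out as mojibake rather than an error:
print(data.decode("iso-8859-1"))  # Ã¼ber

# A strict UTF-8 decoder fed a buffer that ends mid-character fails,
# much like an NIO CharsetDecoder on a partial socket read:
try:
    data[:1].decode("utf-8")   # buffer cut inside the two-byte 'ü'
except UnicodeDecodeError as e:
    print("decode failed:", e.reason)
```

So it is worth checking that both ends agree on one charset, and that decoding only happens once a complete message (not an arbitrary buffer) has been read.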
I agree. Here's the link:
Problems with German umlauts
Problem viewing German umlauts in certain Outlook 2003
PayPal and umlauts, char encoding
soap and utf-8 encoding issue
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158609.98/warc/CC-MAIN-20160205193918-00009-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 955
| 24
|
https://www.veritas.com/support/en_US/article.TECH165227
|
code
|
One media server attached to two OST devices, running optimized deduplication between the two OST devices.
Final error: 0xe0000608 - An error occurred while preparing to duplicate a backup set using optimized duplication. UMI code V-79-57344-1544 provides information about how to copy the backup set without using optimized duplication.
Final error category: Backup Media Errors
For additional information regarding this error refer to link V-79-57344-1544
This issue is resolved in Backup Exec 2010 R3 Hotfix 191248. Refer to the related articles section for the link to download the hotfix.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00043-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 578
| 5
|
https://community.dreamfactory.com/t/sandbox-version-2-3-1-and-multiple-result-sets-from-sp/3472
|
code
|
Hi, I set up a sandbox account yesterday and noticed it’s on version 2.2.1.
I’ve been using version 2.3.1 locally and have a stored procedure with multiple result sets that works great locally, so perhaps this is a bug in 2.2.1 that was fixed. So my questions are:
- How to upgrade the sandbox account to 2.3.1 if possible?
- Am I correct in assuming that multiple result sets from stored procs is fixed in 2.3.1?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315222.56/warc/CC-MAIN-20190820045314-20190820071314-00311.warc.gz
|
CC-MAIN-2019-35
| 413
| 4
|
https://www.nulltrace.org/p/about-me.html
|
code
|
Hi there! I am Himanshu Chauhan. You are welcome to my blog.
I am a "GEEK" from India. I just love computers and have an unending fascination with these machines. I love them in whatever size and shape they come: be they network routers, general-purpose computers, or cute little hacking boards, I just love them.
My hobbies are programming (and more programming), and, when I am not doing that, reading (technical documents, manuals, biographies, entrepreneurship, sci-fi novels) and watching The Big Bang Theory, Mr. Robot, Silicon Valley, etc.
What am I programming? Well, these days I am hooked on Xvisor: the new type-1 bare-metal hypervisor that I am making with my friends. I am the author and maintainer of the AMD64 or x86_64 (whichever way you prefer to call it) version. I hack on it mostly in my free time. I am also getting my hands on the Python language. This is the third language, after C and assembly (I am hooked on x86 and MIPS mostly), which I find fascinating. Things that I used to do with shell scripts, I now try to do with Python. It's a good tool for many things: system admin, routine jobs, GDB extensions, and quick prototypes, to name a few.
Before I got into Xvisor, Linux was something that kept me busy. In fact, that was my daily bread until a year ago, before I also started to get my hands dirty in the networking (data path) side of things. Network routers stretch the boundaries of the machine, as does Xvisor, so I like working on both. I love my work @ office and I love Xvisor. Can't ask for more than that :)
Drop me an email at [ hschauhan at nulltrace dot org].
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256797.20/warc/CC-MAIN-20190522103253-20190522125253-00352.warc.gz
|
CC-MAIN-2019-22
| 1,674
| 6
|
https://www.crazy8press.com/2011/09/20/fly-duckbob-fly/
|
code
|
Yes, it’s finally here! Aaron Rosenberg’s hilarious new science fiction novel, No Small Bills, is now available for sale through both Barnes and Noble and Amazon! Here’s what people are saying about it:
“If you’re looking for some wacky light reading, this book is for you. . . . Bob’s narration is the best part of this book — half Mickey Spillane and half Woody Allen. Definitely great reading for a rainy Sunday afternoon.”
“This book is really fun and funny! . . . Love the narrative voice and the flow of the story.”
“Fans of surrealist humor, Monty Python, Science Fiction, and Douglas Adams’ “Hitchhiker’s Guide” series will enjoy the adventures of DuckBob Spinowitz, a classic wise-acre everyman (who happens to have the head of a duck).”
What are you waiting for? Pick up your copy today!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233508959.20/warc/CC-MAIN-20230925083430-20230925113430-00116.warc.gz
|
CC-MAIN-2023-40
| 829
| 5
|
https://math.stackexchange.com/questions/1983501/for-every-non-square-matrix-prove-that-aat-or-and-ata-is-singular
|
code
|
For every non-square matrix prove that $AA^t$ and/or $A^tA$ is singular.
Like the title says, I want to prove this. I tried to think of ways to prove it but couldn't come up with any.
I know from this answer that $AA^t$ is symmetric, but I can't make the connection.
If someone has at least a hint, it would be great if you could write it in the comment section so I could give it a shot!
Thanks in advance.
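One possible hint (not from the thread itself) via a rank argument: for $A \in \mathbb{R}^{m \times n}$ with $m \neq n$, a product's rank cannot exceed the rank of either factor, so

```latex
\operatorname{rank}(AA^{t}) \le \operatorname{rank}(A) \le \min(m,n),
\qquad AA^{t} \in \mathbb{R}^{m \times m},
```

and likewise $\operatorname{rank}(A^{t}A) \le \min(m,n)$ with $A^{t}A \in \mathbb{R}^{n \times n}$. If $m > n$ then $\operatorname{rank}(AA^{t}) \le n < m$, so the $m \times m$ matrix $AA^{t}$ is singular; symmetrically, if $n > m$ then $A^{t}A$ is singular. Either way, at least one of the two products is singular.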
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363445.41/warc/CC-MAIN-20211208053135-20211208083135-00178.warc.gz
|
CC-MAIN-2021-49
| 405
| 5
|
http://tcdownload.org/windows-media/windows-media-error-code-on-vista.html
|
code
|
Any .AVI file. None of the previously mentioned methods works. Remove WMP 11. HOWEVER, I just solved my issue and feel compelled to report it.
on October 24, 2009 at 4:52 pm said: But mine is Windows 7 and using Media Player 12!!! If you're building a playlist in the rightmost library pane, you may need to push Stop in order to get the time to show up. * To start the player … Click on Network and UNCHECK the UDP bullshit, apply, OK, and hey presto, it works just fine; I can now watch my kick-ass Arsenal team do some damage. After all this work I finally got it!
If that is not the source of the error, then you need to get your system up to date at Windows Update and then reinstall. * If your wmsetup.log contains some other error: interpreting it would require a contextual analysis of the system's setup log. * If your wmsetup.log contains error "0x8000FFFF": This is "Catastrophic failure". Q: How do I determine what version of WMP I have? Note that, yes, you can have WMP6 installed alongside a later version of WMP.
Does anyone have any solutions? I went to Add/Remove programs, and unistalled WMP 11 and restarted the computer. happy problemsolving! For WMP, install the offline "Force Online" fix.
A: Your advapi32.dll file has been corrupted by rogue software on your box (I believe it is now version "4.71.0118.0", which is not the version it is supposed to be). I think it has been posted before. Sure, you can try it if it makes you happy, but … I logged in as the new user, and I could open the network tab in the media player. Scott says that you may be able to clean these up by using
Q: How do I install WMP10 (or newer) in unattended mode? A: See this page for how to fix this. Q: How can I detect what version of WMP a user has via my application? Type regedit into the Start Search box and hit enter. 3.
For other users, you would need to look at the setup log files to determine why update.exe is failing. * If your wmsetup.log contains error "0x8007f0da": This is "Error: Setup could …". No loss in settings, programs or data. Hope it works for someone else. Most anything about a previous version applies to the next version(s) too. Windows Media Player 12 Questions: Windows Media Player 12 is part of Windows 7.
I put that block in because if you install the Windows Media Format 11 Runtime to your system (needed by both of those applications), that will break certain Media Center recordings. In general, you're having networking problems… Go to IE's Tools > Internet Options > Programs menu dialog and click the "Reset Web Settings…" button. (You can uncheck the "Also Reset Home Page" option on the pop-up dialog.) I tried #13, #46 and many more, but not one worked for me!
A: For Enterprise deployment, the EDP should be what you want. (If you contact product support directly, they have an EDP designed to support version 11.) A: This is documented here. Q: I have a question about the WM Player 9 Series installer or uninstaller…
E. Here's how to do it. Q: Why am I getting a "Fatal exception error OE in VxD Logger (03)"? (logger.vxd) Q: Why am I getting a blue screen playing back content from the Internet?
Choose Upgrade The installer will find your existing XP installation and offer to repair it. sorry.. Don't pirate software. If even that doesn't work AND you are on a 32bit system installed to the C:\ drive, this even more unsupported 32bit C-drive-only fix may help.
Reply ↓ cindy on September 26, 2009 at 9:20 pm said: I have a problem playing downloaded video files above 20,000 KB in Windows Media Player, and also .htm files in Firefox. Keep up the good work! Thanks. Run the network setup wizard (icwconn1.exe) again.
God bless you!! This is on a network that has no internet access, so I can't just blindly sift through codecs and updates. You would want to search for the name of the failing INF in that log. Can anyone help?
Thanks) Reply ↓ justin, who tried all the above, on January 19, 2010 at 11:29 am said: I have Vista 32, WMP 11… WMVs aren't playing. A: Paraphrasing Zeb: in some cases firewalls will block UDP traffic, which is required for "real-time" streaming playback. Now, click the Rip Music tab and try changing the rip audio format (in the rip settings section) to Windows Media Audio; this sometimes cures this CD rip error.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947939.52/warc/CC-MAIN-20180425174229-20180425194229-00600.warc.gz
|
CC-MAIN-2018-17
| 4,676
| 14
|
https://ccronline.sigcomm.org/2019/ccr-october-2019/five-decades-of-the-acm-special-interest-group-on-data-communications-sigcomm-a-bibliometric-perspective/
|
code
|
Waleed Iqbal, Rana Tallal Javed, Junaid Qadir, Adnan Noor Mian, Gareth Tyson, Saeed-Ul Hassan, Jon Crowcroft
The ACM Special Interest Group on Data Communications (SIGCOMM) has been a major research forum for fifty years. This community has had a major impact on the history of the Internet, and therefore we argue its exploration may reveal fundamental insights into the evolution of networking technologies around the globe. Hence, on the 50th anniversary of SIGCOMM, we take this opportunity to reflect upon its progress and achievements, through the lens of its various publication outlets, e.g., the SIGCOMM conference, IMC, CoNEXT, HotNets. Our analysis takes several perspectives, looking at authors, countries, institutes and papers. We explore trends in co-authorship, country-based productivity, and knowledge flow to and from SIGCOMM venues using bibliometric techniques. We hope this study will serve as a valuable resource for the computer networking community.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00439.warc.gz
|
CC-MAIN-2023-50
| 985
| 2
|
https://sparkbureau.org/membership/membership-enquiry?membership_type=Small%20Office
|
code
|
Due to high demand, we only have one small office (~10 m²) available. More office space may become available in the new year.
We have several medium-sized (~25 m²) offices available.
Please fill out this form and we'll get back to you within a business day.
"*" indicates required fields
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00334.warc.gz
|
CC-MAIN-2023-06
| 285
| 4
|
https://www.itprosphilly.com/principal-full-stack-engineer-remote-us-east-coast/
|
code
|
Join a revolutionary Series A IoT Energy startup with 45+ employees that deploys smart devices to reduce environmental impact and has raised over $20M, as their fully remote Principal Full Stack Engineer.
Compensation: $180,000 to $225,000 + Stock options
Work Location: 100% USA East Coast-Remote
- 401k Plan
- Flexible, remote work arrangements
- Flexible Paid Time Off (PTO)
- 14 Paid Holidays
- Comprehensive and affordable health benefits
- Pet Insurance
- Generous Paid Parental Leave
- Team building and volunteering events
- Monthly internet stipend
- Unlimited professional growth potential
Important Note: You must be authorized to work in the USA without any work restrictions, now and in the future, to be considered.
The ideal Principal Full Stack Engineer candidate has extensive user interface development experience and knows how to optimize interactions and user experience.
The Principal Full Stack Engineer would have at least 8+ years of hands-on software engineering in petabyte-scale data systems.
Additional Experience Qualifications:
Ability to collaborate and debate ideas, with the outcome of finding the best solution quickly; fearless, aggressive, and creative in solving problems and tasks; takes ownership of and improves upon our project management process; able to deliver high-quality experiences quickly and then learn from successes and failures to make them better.
- Round 1 = Screening call w/ IT Pros
- Round 2 = Zoom Video Interview w/ Hiring Team + CEO (2.5 hours)
- Round 3 = Decision w/ 3 References + Background Check
http://winintro.ru/certtmpl.en/html/85e1436e-4c52-489a-93a2-6603f1abadf7.htm
Certificate templates are an integral part of an enterprise certification authority (CA). They are an important element of the certificate policy for an environment, which is the set of rules and formats for certificate enrollment, use, and management.
When a CA receives a request for a certificate, groups of rules and settings must be applied to that request to perform the requested function, such as certificate issuance or renewal. These rules can be simple or complex and may apply to all users or specific groups of users. Certificate templates are the sets of rules and settings that are configured on a CA to be applied against incoming certificate requests. Certificate templates also give instructions to the client on how to create and submit a valid certificate request.
Certificates based on a certificate template can only be issued by an enterprise CA. The templates are stored in Active Directory Domain Services (AD DS) for use by every CA in the forest. This allows the CA to always have access to the current standard template and ensures consistent application of the certificate policy across the forest.
Administrators of Windows Server 2008–based enterprise CAs can use a number of predefined certificate templates. For more information, see Default Certificate Templates.
Certificate templates introduced in Windows Server 2008, Windows Server 2003, and Windows 2000 have different levels of configurability. For more information, see Certificate Template Versions.
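The rule-application mechanism described above can be sketched abstractly. The following is a toy Python model, not the real AD CS implementation or API; the class names, fields, and the three example rules (template match, minimum key size, permitted key usages) are invented for illustration of how a CA applies a template's settings to an incoming request:

```python
# Toy model only: a CA-side check of a certificate request against a template.
# These names and rules are illustrative, not the AD CS API.
from dataclasses import dataclass

@dataclass
class CertificateTemplate:
    name: str
    allowed_key_usages: frozenset
    min_key_size: int
    validity_days: int

@dataclass
class CertificateRequest:
    template_name: str
    key_size: int
    key_usages: frozenset

def evaluate_request(template: CertificateTemplate, request: CertificateRequest):
    """Apply the template's rules to a request; return (issued, reason)."""
    if request.template_name != template.name:
        return False, "request references a different template"
    if request.key_size < template.min_key_size:
        return False, f"key size {request.key_size} below minimum {template.min_key_size}"
    if not request.key_usages <= template.allowed_key_usages:
        return False, "requested key usages not permitted by the template"
    return True, f"issued; valid for {template.validity_days} days"

web_server = CertificateTemplate(
    "WebServer", frozenset({"digitalSignature", "keyEncipherment"}), 2048, 365)
request = CertificateRequest("WebServer", 2048, frozenset({"digitalSignature"}))
print(evaluate_request(web_server, request))
```

In the real feature, the templates (and hence these rule sets) live in AD DS, which is what lets every enterprise CA in the forest apply the same policy consistently.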
https://iphostmonitor.com/mib/oids/DOCS-IF-MIB/docsIfCmtsChannelUtUtilization.html
With IPHost Network Monitor you can run simple snmp requests against a Cisco device in your network.
DOCSIS If Cmts Channel Ut Utilization
The calculated and truncated utilization index for this physical upstream or downstream channel, accurate as of the most recent docsIfCmtsChannelUtilizationInterval.

Upstream Channel Utilization Index: The upstream channel utilization index is expressed as a percentage of mini-slots utilized on the physical channel, regardless of burst type. For an Initial Maintenance region, the mini-slots for the complete region are considered utilized if the CMTS received an upstream burst within the region from any CM on the physical channel. For contention REQ and REQ/DATA regions, the mini-slots for a transmission opportunity within the region are considered utilized if the CMTS received an upstream burst within the opportunity from any CM on the physical channel. For all other regions, utilized mini-slots are those in which the CMTS granted bandwidth to any unicast SID on the physical channel.

For an upstream interface that has multiple logical upstream channels enabled, the utilization index is a weighted sum of utilization indices for the logical channels. The weight for each utilization index is the percentage of upstream mini-slots allocated for the corresponding logical channel. Example: if 75% of bandwidth is allocated to the first logical channel and 25% to the second, and the utilization indices for each are 60 and 40, respectively, the utilization index for the upstream physical channel is (60 * 0.75) + (40 * 0.25) = 55. This figure applies to the most recent utilization interval.

Downstream Channel Utilization Index: The downstream channel utilization index is a percentage expressing the ratio between bytes used to transmit data and the total number of bytes transmitted in the raw bandwidth of the MPEG channel. As with the upstream utilization index, the calculated value represents the most recent utilization interval.
Formula: downstream utilization index = 100 * (data bytes / raw bytes)

Definitions:
- Data bytes: the number of bytes transmitted as data in the docsIfCmtsChannelUtilizationInterval. Identical to docsIfCmtsDownChannelCtrUsedBytes measured over the utilization interval.
- Raw bandwidth: the total number of bytes available for transmitting data, not including bytes used for headers and other overhead.
- Raw bytes: raw bandwidth * docsIfCmtsChannelUtilizationInterval. Identical to docsIfCmtsDownChannelCtrTotalBytes measured over the utilization interval.
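The two utilization calculations can be reproduced directly. The function names below are our own, and the inputs are the 75%/25% example figures from the MIB description above, not live SNMP data:

```python
# Sketch of the upstream and downstream utilization formulas from the
# DOCS-IF-MIB description (illustrative function names, example numbers).

def upstream_utilization(logical_channels):
    """Weighted sum of per-logical-channel utilization indices.

    logical_channels: list of (utilization_index, mini_slot_fraction) pairs,
    where the mini-slot fractions sum to 1.0.
    """
    return sum(index * fraction for index, fraction in logical_channels)

def downstream_utilization(data_bytes, raw_bytes):
    """Percentage of the raw MPEG-channel bytes that carried data."""
    return 100 * data_bytes / raw_bytes

# Example from the text: 75%/25% mini-slot split with indices 60 and 40.
print(upstream_utilization([(60, 0.75), (40, 0.25)]))  # 55.0
```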
IPHost Network Monitor allows you to monitor docsIfCmtsChannelUtUtilization on a Cisco device via the SNMP protocol.
Download IPHost Network Monitor (500 monitors for 30 days, 50 monitors free forever) to start monitoring Cisco multiplexers right now.
https://sei.cmu.edu/saturn/2013/presentations/2008.cfm
April 29 to May 3, 2013
Software architects from around the world attended the Fourth Annual SEI SATURN Workshop, where they exchanged best practices in developing and acquiring software architectures and using them to build predictable, high-quality systems. This year the keynotes, presentations, and tutorials brought software architecture experts together to share how they successfully put SEI and other architecture technologies into practice.
Stay connected throughout the year by visiting the SATURN website, where software architecture technology users network to get the latest news, articles, and reports, as well as to share best practices and lessons learned.
Debugging Software Architectures
As software architectures are used to describe larger and more complex systems, it is increasingly difficult to find the cause of an error in the event of a failure. Debugging is commonly used in programming languages to effectively find the cause of a failure and locate the error to provide a fix. The same should be possible in software architectures, to debug architecture failures.
As part of our work in debugging software architectures, we are identifying a classification of architectural defects. This provides a basis for forming a hypothesis about what has caused the defect in the architecture. In the debugging process, the chosen hypothesis is either confirmed or refuted. Once it is confirmed, a possible correction can be identified to fix the architecture.
The debugging process involves debugging at the structural level and execution level of the software architecture. Structural errors are debugged through static aspects of the architecture. Execution errors are debugged through the use of a simulator, for instance the ADeS simulator for AADL.
In this presentation, we introduce our approach to debugging software architectures and present preliminary results.
On ADLs and Tool Support for Documenting View-Based Architectural Descriptions
DistriNet is a research lab with 60+ researchers. The general domain of expertise and innovation of DistriNet is the development of advanced open and distributed software applications. The research is application driven and is conducted in close collaboration with industry. One particular class of applications we target is that of decentralized systems that are characterized by a high degree of dynamism and change in either the problem or the system's environment. Example domains of interest are manufacturing control, supply chains, inland shipping, and traffic control.
To document software architecture, we follow the approach of SEI Views and Beyond (V&B). V&B is an approach for documenting software architecture by means of a set of relevant views and adding information that applies to more than one view. Views describe (parts of) the system from different perspectives, exposing different quality attributes that are of interest for particular stakeholders.
In several projects in which we applied V&B, we found that managing and maintaining a consistent architectural documentation is a tedious task, including maintaining the mapping between views, maintaining the related view packets within each view packet, updating context diagrams, maintaining consistency w.r.t. combined views, etc.
While V&B offers a well-defined approach to organizing architectural documentation, there is a lack of support in ADLs and associated tools for documenting software architectures that comprise several, interrelated views. Existing ADL tools (e.g. AADL, ArchStudio, AcmeStudio) offer several ways to organize architectural documentation but do not support views as first-class concepts of architectural documentation. From our experience, there is a gap between the state of the art on documenting software architectures and the state of the practice in ADL tool support for documenting architecture.
We advocate that developing ADLs and tool support specifically targeted at view-based architectural descriptions is imperative. This can significantly increase the level of comfort for managing view-based architectural descriptions. As a first step, we investigate extending an existing ADL, i.e., xADL, with support for documenting a number of relations among view packets of structural views. We integrated this extension in ArchStudio and used this extended tool for documenting the architectures of a traffic control system as well as a digital newspaper publishing system. We observed that the tool significantly improves consistency management. Another interesting benefit is that the tool enables an architect to generate composed views on the fly, which was found to be useful in the interaction with stakeholders, particularly developers.
Currently, we are expanding the scope of xADL and Archstudio with support for documenting view packets and their relations across multiple views. From our experience, we put forward a number of challenges that are key to translating the existing body of knowledge on views and relations into proper tool support. These challenges include (1) selecting a set of practical views and relations, (2) formally specifying these views and relations, and (3) designing a tool that provides an intuitive user interface, while hiding the complexity that lies beneath.
Quality Attributes and Requirement Traceability
A. LeClerc, Unisys Corporation
Quality attributes are requirements that directly affect the building of application and software systems. Quality attributes in fact act as "super" requirements; it might be better to call them meta-requirements. A single quality attribute might impact hundreds of other client requirements. It would be desirable to be able to capture quality attributes in a requirements database and provide "traceability" between the quality attributes, the requirements, the features of a solution, and the components of the eventual proposed architecture.
Unisys has developed a proprietary requirements-driven methodology called CDPro2 (Customer Driven Proposal Process). CDPro2 is both a process and a tool set to achieve traceability between the information components of requirements-driven proposals and projects. This methodology allows Unisys to capture and document requirements of an architecture, features of an architecture (including its quality attributes), components of an architecture, and the associated costs to build those architecture components.
This process/methodology is accompanied by a tool set built as an "overlay" on top of IBM's Rational Tools Suite, including the requirements management tool, Requisite Pro. Requisite Pro has been considerably extended from its basic requirements mission to become a generalized information manager. The customization allows for the capture of all sorts of information related to requirements (including quality attributes) and provides for generalized traceability between all information data types.
As a result, it allows the development team to use the tool set to capture all architecture information. This information can then be traced from the requirements to the quality attributes to the architecture components and finally to their development costs. Thus, extensive information traceability is provided. This has been particularly useful for large outsourcing engagements.
The presentation will describe the overall process and illustrate the use of the tool set but not in great depth.
Some Perspectives in Teaching Software Architecture
This report talks about the experiences that the authors faced in teaching software architecture to senior undergraduate and post-graduate students at IIT Kanpur over the last five years. The problem is one of teaching design and architecture to a community with a background only in programming (programming in the small)—a situation we sometimes face in the induction and training programs in the industry as well. We catalog the approaches that worked and point out some of the problems in teaching in a classroom setting, a domain which is perhaps best learned through an apprenticeship.
Software Architecture in an Integrated Engineering Methodology
J. D. Baker, BAE Systems
Fitting software architecture into the engineering process becomes a challenge when you are developing complex systems. What are the inputs? Where do they come from? How do I know that what the other disciplines are creating will meet my needs? How do I know I'm creating useful work products? Are they being produced at the right time? Recognizing this complexity, BAE Systems has developed the Integrated Engineering Methodology (IEM), a model-based, end-to-end methodology that seeks to ensure that only the products that are needed are developed and that development occurs at the right time. How do you do all that and maintain the organization at CMMI Level 5? This presentation describes the IEM, highlights the software architecture, and describes its relationship to the other elements of the methodology.
Architecture Empowerment - A Quality Attribute of Software Architecture Realms to Build Empowered Organizations
I. Eldo, Philips Healthcare
Organizations that empower teams and individuals are efficient and successful. At its core, empowerment requires two key elements: 1) effective knowledge at every participant's disposal, so that they can make informed decisions in every step they take, and 2) a process/structure where individuals and teams can take and own decisions within their job scope. This equally benefits the organization and individuals, as everyone can leverage their talents. Empowerment is not easy to achieve: there is a fine line between empowerment and slipping into chaos, so organizations turn instead toward a central command-and-control model. An effective implementation depends on a deep understanding of every job circumstance and of what empowerment means in that situation; because of this, organizations struggle to empower.
In the software engineering realm, the lives of organizations and individuals are centered on the basic tenets of software development: requirements, architecture, design, coding, testing, implementation, and support. Given this fact, in order to empower, there has to be deep knowledge of the problems (requirements) and solutions (architecture/designs) across the organization, so that everyone can make the right decisions in their jobs. This also requires software methodologies that promote ownership at all levels of the organization (e.g., features, subsystems). Architecture and the architecture process are the keys to achieving this. I would like to call architecture (the end result, the knowledge) and the architecture process (the architecting and designing process) the architecture realm.
The architecture realm is the glue that holds everything together—the system, the participants, and the process steps. Starting from requirement analysis through maintenance, every phase greatly depends on it, some more than others. For example: 1) Everyone needs knowledge of the architecture to make the right decisions, whether spectators (someone who just hears about your project), stakeholders (marketing, sales, management, development, test, implementation, support), potential customers (curious spectators), or customers. 2) To leverage great designers and developers, the architecture and architecture process have to be empowering and should give everyone a foundation to build on, using their ideas and skills. It should also allow teams to own features and areas. 3) To have quality testing, testers need to understand the requirements and the solution to those requirements, which lies in the architecture and design. 4) Customers need to understand how a product's architecture will fit into their enterprises. There are many more examples like these; all of them point out the need to empower the organization architecturally.
Traditionally, the architecture realm is measured in terms of "ilities" (flexibility, availability, scalability, extensibility, etc.). These attributes target qualities of the end solution, i.e., the architecture. But an important aspect of the architecture realm that does not get enough attention is its contribution to building successful organizations. Generally, it is much more obvious when the architecture realm stands in the way of making an organization successful. As a simple example, if the architecture is such that it needs central command and control for every decision, scaling a team is going to be very difficult. At the same time, you also need to make sure everyone is marching to one beat: building modules that fit into the architecture, using the framework elements of the system, and using their brains to design and develop features. To be successful in this, the architecture realm needs to consider all these facts when it is put up.
So architecture empowerment is about having an architecture realm (architecture and architecture process), which will help everyone involved to gain the knowledge they need to perform their duties and provide a framework which permits teams and individuals to comprehend, own, influence, and execute things they are responsible for effectively, thereby benefiting both the organization and personal lives of the employees.
If achieved, following are some key benefits to the organization from architecture empowerment:
Following are a few basic tenets to achieving architecture empowerment in an organization:
In the presentation, I will expand on these concepts and explain how and why I see these are going to empower organizations and make them more successful.
Challenges and Observations of Applying the SEI ATAM to a Software Testing Automation Solution
R. Arakaki, Instituto de Pesquisas Tecnológicas de São Paulo
F. Enobi, Instituto de Pesquisas Tecnológicas de São Paulo
The automated testing solution was implemented to speed up the software testing process in projects with very complex schedule, cost, and quality tradeoffs. The business requirements established for acquiring the testing automation solution included very critical nonfunctional requirements necessary to achieve successful results. Evaluating and measuring the nonfunctional requirements became very critical for evaluating risks, non-risks, tradeoffs, and metrics, and how each business requirement could be affected. The ATAM was chosen to create a link between business and nonfunctional requirements through a very clean scenario description language.
The presentation will focus on the process and give samples of the following aspects of the project:
Lessons Learned from Deployment and Production Use of Architects' Workbench - An Architectural Thinking and Modeling Tool
D. Kimelman, IBM J. Watson Research Center
Information technology (IT) architects know how hard it is to collect architectural information in an engagement and keep it all clear and organized in their minds. Transforming that information into the models of a viable architecture and keeping associated work products consistent and up-to-date is an even greater challenge. Despite this, model-centric architectural methods are not as widely adopted or as closely followed as they could be, partly due to a lack of appropriate tools. Architects' Workbench (AWB) is prototype technology that addresses these problems and supports the creative process of architectural thinking and modeling.
Reconstructing the Architecture Model for a Sustainable Software System
Pia Stoll, ABB Corporate Research
A sustainable software architecture, which has evolved over more than 10 years and is to live and change for at least another decade, is very difficult to capture in an architecture model. The architecture is often a mixture of old and new tactics, and the system use cases that were once valid no longer capture the essence of all of the system's functionality and business goals. The case study to be presented dealt with the reconstruction of an architectural model for a sustainable software architecture that had grown out of control. The original architects had left the development organization, and the new architects did not have full control of all parts of the architecture.

In an attempt to regain control of the architecture, the goal of the development organization's architecture team was to document the architecture according to a model so that it could be communicated among its stakeholders. The team started from the SEI books Documenting Software Architectures: Views and Beyond and Software Architecture in Practice. The team's vision was to capture, in one model, all domain-specific issues such as trends and experiences, quality-attribute-specific issues, and business goal issues that influence the architecture at the enterprise, system, and software levels. The effects of changing business goals and software quality attributes on system architecture and software design should be made visible in the model. By making the relationships visible, the architects would be able to see what effect a changing business goal could have on the architecture, or even predict how a shift in technology would affect the system and software architecture. The model would then serve as a decision-guiding tool and be used in an active fashion instead of merely being a blueprint of today's software architecture construction.
The vision of documenting the different architecture levels in one model was more complex to realize than expected. The case study ended up revealing a conflict between the common approach of dividing the architecture into different views and the need of sustainable systems to accommodate, in one adaptive architecture model, changes in business goals, technology environment, and enterprise constructions affecting the architecture. This presentation aims at opening up the discussion on how to document a continuously changing architecture at the enterprise, system, and software levels in one and the same model.
Evaluating Distributed Systems Architectures for Fault-Tolerant Applications
A large body of experience has been developed within the telecommunications industry with regard to fault-tolerant distributed systems architecture. This presentation focuses on key topics to consider in evaluating a proposed architecture for use in asynchronous, event-driven applications whose system quality attributes include stringent requirements for availability, reliability, and evolvability. A representative list of such topics includes:
- The processing model
- Interprocess communication
- Redundancy model
- Fault management and recovery
- Graceful degradation under load
- Operational management and maintenance
- System debugging environment

Architecture and design patterns derived from best practices emerging from the telecommunications industry will be discussed in order to provide additional insight into proven architecture and design practices being used in deployed fault-tolerant commercial systems. In addition, there will be discussion about how these topics and patterns can be applied within the context of the SEI Architecture Tradeoff Analysis Method (ATAM) of software architecture evaluation.
Current SAT Work in Architecture Evolution
Architecture evolution is the process of designing an architecture to meet today's and tomorrow's business goals, while maximizing expected value, in the face of uncertainty. Architecture evolution therefore has two foundations: 1) architecture design, which allows us to reason about the quality attribute consequences of design decisions with respect to trajectories of evolutionary steps and 2) software engineering economics, which looks at the consequences of design decisions as investments and gives us techniques to reason about the value of such investments given future uncertainty. In this talk, I will sketch our approaches to both aspects of evolution.
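The software-engineering-economics view described above can be illustrated with a toy calculation. Everything here (the cashflows, probabilities, and discount rate) is invented for illustration; it simply shows how two evolution paths could be compared by expected net present value under uncertainty:

```python
# Toy sketch (invented numbers): compare two architecture evolution paths
# by expected net present value (NPV) across uncertain future scenarios.

def npv(cashflows, rate):
    """Discounted sum of (year, amount) cashflows at the given discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

def expected_npv(scenarios, rate):
    """scenarios: list of (probability, cashflows) pairs summing to 1.0."""
    return sum(p * npv(flows, rate) for p, flows in scenarios)

# Path A: a cheap refactor now; big payoff only if the market shifts (60%).
path_a = [(0.6, [(0, -100), (1, 90), (2, 90)]),
          (0.4, [(0, -100), (1, 20), (2, 20)])]
# Path B: a larger redesign with a steadier payoff either way.
path_b = [(0.6, [(0, -250), (1, 160), (2, 160)]),
          (0.4, [(0, -250), (1, 130), (2, 130)])]

print(round(expected_npv(path_a, 0.1), 1))
print(round(expected_npv(path_b, 0.1), 1))
```

The design decision with the higher expected NPV is not automatically the right one; the point is only that framing evolutionary steps as investments makes the tradeoff explicit.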
Putting Software Architecture in Its Place - Fitting Software Architecture into the Enterprise Technology Landscape
Eoin Woods, Barclays Global Investors
As you navigate a software technology-oriented organization, as either an end user or a vendor, you often encounter many quite senior people with the word "architect" in their job title. Software architects, enterprise architects, data architects, systems architects, solutions architects, infrastructure architects…the list goes on and on. You quickly realize that these people can't all be doing the same job, and this realization is reinforced as you meet the people concerned and often find that they have quite different skills, responsibilities, and interests.
In this talk, I hope to shed some light on this confusing landscape by sharing my thoughts on the fundamental types of architectural activity that are found in the modern enterprise. I will identify the main types of architects that you encounter in different organizations in terms of their responsibilities, the tasks they undertake, the tools and techniques they are likely to find useful in their work, and the way their roles typically relate to one another. Identifying those things should make the situation a little clearer and improve communication between practitioners and researchers, as we aim to refine and improve the state of software architecture practice.
Realizing the Business Value of IT: An Approach for Architecture Evaluation
Opal Perry, Wells Fargo & Company
This presentation will discuss our efforts to extend aspects of SEI architectural evaluation methods within a division of Wells Fargo and Company that has recently deployed a production system using a service-oriented architecture (SOA) approach. Our focus was on developing a mechanism to articulate critical business processes as the context within which quality attribute requirements could be concretely defined and the supporting architecture evaluated. In leveraging and extending the SEI methods, we sought to ensure that evaluation proceedings and results provided practical and immediate meaning to business stakeholders in terms of measuring and realizing true business value.

The concrete articulation of critical business processes and their associated quality attribute requirements represents a significant step forward in our environment as a starting point for measuring availability and performance in the context of true business value instead of simply technical uptime or response time. In the past, we might have been satisfied that a component was up 99.9% of the time or performed well within its service level, but the business could view it as a failure because some other component, which was needed to complete a critical business process, was unavailable. For this reason, the focus on the critical business processes has influenced our approach to both discovering and documenting quality attribute requirements as well as evaluating the architecture designed to achieve them.

We will describe the key aspects of our approach and how we leveraged and extended SEI techniques such as the Quality Attribute Workshop (QAW) and Architecture Tradeoff Analysis Method (ATAM) to create a practical approach for evaluating the architecture's ability to meet the essential quality attribute requirements that would provide the most business value within the context of the business capability roadmap.
This customized approach was necessary in our environment due to the size of the system being evaluated, time and resource constraints, and cultural realities. Additionally, we will discuss how we organized stakeholder participation in prework and the approach taken for conducting sessions in early 2008. We will share our observations and plans for future activities and discuss the challenges of extending architecture evaluation methods within a culture not previously familiar with scenario-based methods, as well as the challenges associated with bringing business and technical stakeholders together to discuss expectations and issues. In offering this presentation, we hope to contribute to an active dialogue on architecture evaluation methods in the software development community, with a focus on the realization of true business value.
Inexpensive ATAM-Peer Review Detects and Fixes Architecture Problems Early
H. Forstrom, ITT Corporation
ITT has pioneered a procedure for adapting the SEI Architecture Tradeoff Analysis Method (ATAM) reviews into incremental peer reviews. The ATAM-Peer Review leverages the discipline and benefits of the standard ATAM Review in a lite review that is performed earlier in the process yet still reaps the major benefits of a full-up ATAM. The ATAM-Peer Review has identified architecture weaknesses very early in the life cycle when fixing them was trivial. This review includes a mini stakeholder analysis and an SEI Quality Attribute Workshop. By incorporating these into our project launch and performing ATAM-Peer Review training as just-in-time training, ITT has reduced the impact of this review to four hours. In piloting this method, ITT succeeded in finding over eight weaknesses that had been overlooked by the software architect and designers. These weaknesses were fixed, and the resultant architecture was successfully deployed. Specifically, a missed modifiability requirement would have limited the system's success had it not been fixed.
Architecture Curve, New Formatted SEI ATAM Report Shaped in a Single Graph
Heeran Youn, Samsung Electronics
As the size and complexity of software in embedded systems grow exponentially, software architecture becomes one of the key success factors in the Consumer Electronics (CE) industry. Our organization, Samsung Electronics, has applied the SEI Architecture Tradeoff Analysis Method (ATAM) and SEI Quality Attribute Workshop (QAW) for years in pursuit of better software architecture. But in the real competitive marketplace, the ATAM and QAW are relatively expensive tasks to perform. We tailored them in a lightweight way and produced an additional ATAM report shaped as a single graph, so that ATAM results can be grasped at a glance, in contrast to relatively long plain-text ATAM reports. This presentation will
Applying SEI Architecture Tradeoff Analysis Method (ATAM) as Part of Formal Software Architecture Review
C. Byrnes, MITRE
Ioannis Kyratzoglou, MITRE
In preparation for a customer's Software System Critical Design Review (CDR), we concluded that an assessment approach based on a hybrid version of the SEI Architecture Tradeoff Analysis Method (ATAM) would be a good fit for an assessment of this software architecture. This paper will provide ideas on how to apply the ATAM within the context of a formal CDR of a large-scale, complex software system.
Identifying and Documenting Primary Concerns in Industrial Software Systems
Pia Stoll, ABB Corporate Research
Roland Weiss, ABB Corporate Research
ABB's business relies on industrial software systems in all of its divisions. Although the domains differ (power, automation, robotics), these systems share certain characteristics, both in functionality and in quality attributes. These sustainable software systems are tightly coupled with hardware systems, must provide high reliability, are split into engineering and operations parts, and typically live over a long period of time. Maintaining and extending such systems poses an interesting challenge, as it includes responding to changes in business goals, the technical environment, stakeholders' concerns, and the organization. The presentation covers our experiences identifying and prioritizing the primary concerns for two existing software systems within ABB business units, including the gathering of use cases and quality attribute scenarios for the existing systems and for their planned extensions. The first project extended the remote interface of a robotics software application with new functionality and requirements for integration with higher- and lower-level systems; the latter were documented with prioritized deployment scenarios. The second project concerned a product line approach for three gauge systems. The software engineers collected use cases from a set of workshops with the key stakeholders and completed the identification of primary concerns in an SEI Quality Attribute Workshop. Commonalities and variation points were extracted from the use cases. We made the following observations during the first project:
* ABB's global business structure generally requires a distributed approach to gathering use cases and quality attribute scenarios, mandating an effective strategy for running these distributed interviews (in both location and time) and merging the information.
* Combining use cases and quality attribute scenarios provides an excellent methodology for capturing system characteristics and for discussing the system's primary concerns with the different stakeholders.
* Some system characteristics were not covered by use cases and quality attribute scenarios. Therefore, we added project- and domain-specific mechanisms to get a complete picture of the system for making sound architecture and design decisions.
The second project yielded the following observations:
* Stakeholders voted with a specific mind-set in the QAW. Instead of voting on the legacy primary scenarios, e.g., "Implement the same performance as today," they voted on what new functionality they considered most important for the next-generation products based on the common platform.
* The QAW did not cover all primary concerns with a positive impact on the prioritized business goal. Therefore, the ABB-developed IF method was used to prioritize these concerns based on use cases, the QAW, and interviews.
* The identification of commonalities and variation points according to the SEI methodology simplified the first sketch of the architecture.
The result of these activities was a methodology to drive system development projects for industrial software systems. The application of use cases and QAWs enables deriving system architectures and service interfaces. Augmenting this approach with the IF method allows identifying the systems' primary concerns and prioritizing them in line with the business goal. Finally, idiosyncrasies of the application domain have to be addressed with specific techniques.
Current SEI SAT Initiative Technology Investigations
One of the axioms of the SEI's Software Architecture Technology (SAT) Initiative is that quality attribute requirements, such as those for security, performance, modifiability, and usability, have a dominant influence on a software architecture. Many people are familiar with methods such as the SEI Architecture Tradeoff Analysis Method (ATAM) and SEI Quality Attribute Workshop (QAW) and would like to know more about the current research of the SAT Initiative. In this talk, we will discuss two current research projects. First, we will provide a brief overview of our investigation into service-oriented architectures. Second, we will focus on SEI ArchE, an expert system that makes the design process more transparent and helps software architects make appropriate decisions. Currently, ArchE can reason about modifiability and real-time performance. We have recently completed an interface that will allow collaborators to add quality attribute knowledge to enhance ArchE's capability.
Defining Composite Critical Scenarios for the Development of Large-Scale System Architecture Using an SEI ADD-Based Framework
Aldo Dagnino, ABB Corporate Research
Qingfeng He, ABB Corporate Research
Shakeel Mahate, ABB Corporate Research
This presentation discusses how SEI Attribute-Driven Design (ADD) was used to develop a framework that served as the basis for the software architecture of a complex, large-scale control system in a multinational organization. The emphasis is on the process that was followed to define the composite critical scenarios used to define the software architecture. The project was quite challenging, as it had several unique characteristics. First, the product development unit is geographically dispersed, so the business managers, architects, and development team were not located in the same geographic region. Second, there was obvious disagreement among the stakeholders in the business unit regarding business goals, priorities, functionality, and software quality attributes. Third, the business unit already had several competing products maintained using different, incompatible technologies, so members of each product group had strong biases toward their own technology. The authors will discuss how these challenges were addressed to create a critical-scenario framework that was agreed upon by all parties. This framework extended the ADD methodology by clustering ADD scenarios into themes that described the system-critical scenarios. The presentation will describe how the organization's business goals were defined and collected in geographically distributed workshops. Because opinions diverged widely, a voting mechanism was employed to define and prioritize the business goals. Market requirements were collected to define the primary functionality of the system. Details will be provided on the method used to collect and document these requirements. Using the set of prioritized business drivers, the software qualities of the large-scale system were defined.
Using the software qualities and the market requirements, the system's critical scenarios were defined. These critical scenarios are described by common functionality themes derived from the market requirements and a set of nonfunctional requirements derived from the software qualities. To make the system-critical scenarios useful, the nonfunctional requirements need to be quantified. Because the same nonfunctional requirement can appear in several critical scenarios, the functionality theme provides the context under which each nonfunctional requirement is given a value. This presentation will discuss this process in detail and provide examples for the audience. Another aspect that will be discussed is the level of granularity of the requirements needed to define the system's architectural critical scenarios. While the requirements associated with system functionality were defined at a higher level of granularity, the nonfunctional requirements associated with the software qualities were defined at a low level of granularity. An explanation and examples will be provided during the presentation.
On Software Architecture, Agility, Cost and Value
For many proponents of an agile approach to software development, software architecture is often seen as BUFD (Big Up-Front Design) and therefore as pure evil. But for novel, large system development, a decent architecture is not likely to simply emerge out of weekly refactorings, and we have witnessed such projects "hit a wall," like marathon runners, after a few months of agile euphoria. They tried to do the right thing from an agile perspective: deliver value to the end user at each iteration. But we notice that they often confuse cost with value and have decided that software architecture has no value whatsoever. By clarifying the concept of value and attributing some value to architectural design and implementation, we can reconcile agile projects with up-front design and exploit the richness of the SEI Cost Benefit Analysis Method (CBAM).
April 29 – May 3, 2013
Get the latest SATURN news, important dates, and announcements on the SATURN Network blog, sign up for our email updates, follow us on Twitter (@SATURN_News, #SATURN2013), and join the SATURN LinkedIn Group.
Phone: +1 412-268-5800
Toll Free (within the USA): +1 888-201-4479
FAX: +1 412-268-6257
http://superuser.com/questions/439591/why-doesnt-redirect-work-for-scp-password-input
I want to copy files from a remote box with scp, but having to re-enter the password each time is annoying. So I stored my password in a plain-text file and expected the following command to work:
scp -Pport_num username@hostname:path_to_file local_path < passwd
After googling, I know this goal can be achieved with the help of expect, but I can't figure out why the input redirection fails.
Thanks and best regards.
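For what it's worth, the redirection fails because scp (like ssh) reads the password from the controlling terminal (/dev/tty), not from standard input, so `< passwd` never reaches it. expect works because it drives a pseudo-terminal. Here is a hedged sketch of the expect approach the question alludes to; the host, port, and paths are the placeholders from the question, and the script is untested against a real host:

```shell
# Write a small expect helper that answers scp's password prompt.
# scp reads the password from /dev/tty, not stdin, which is why
# "< passwd" has no effect; expect allocates a pseudo-tty instead.
cat > scp_auto.exp <<'EOF'
#!/usr/bin/expect -f
set password [lindex $argv 0]
spawn scp -P 2222 username@hostname:path_to_file local_path
expect "password:"
send "$password\r"
expect eof
EOF
chmod +x scp_auto.exp
# usage: ./scp_auto.exp "$(cat passwd)"
```

An alternative that avoids expect is sshpass (`sshpass -f passwd scp ...`); better still is key-based authentication (`ssh-keygen` plus `ssh-copy-id`), which removes the need to store the password in plain text at all.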
http://www.perfectfind.co.uk/contact-me/advertise/
Paid advertising on Perfect Find
If you are interested in promoting your business with a banner, please contact me using the comments box below.
https://answers.sap.com/questions/680959/sap-cc-instal-custom-reports-issue.html
When I try to install the standard reports from IA, I get the following error:
Folder name provided by user: \\WPB_SQL\c$\Program Files\Microsoft SQL Server\MSRS12.CRCCSAP01\Reporting Services
Install and setup WicomTranslator and translator.txt
Error occurred while copy from C:\SAP\ContactCenter\VU\CR_Prod_Standard_Reports\bin\WicomTranslator.dll to \\WPB_SQL\c$\Program Files\Microsoft SQL Server\MSRS12.CRCCSAP01\Reporting Services\ReportServer\bin\WicomTranslator.dll ==>> Permission denied
Error occurred while copy C:\Program Files\SAP\ContactCenter\Translator.txt. Permission denied
Reporting Services is configured according to the guide.
Any help or advice would be most gratefully received. I've been banging my head against this for too many hours now !!
https://searchenterprisedesktop.techtarget.com/tip/Using-third-party-technologies-with-Microsofts-NAP
NAP is more than just a Microsoft technology -- 87 partners are integrating their software into NAP's framework in hopes of further extending security enforcement protections to custom configurations.
Microsoft designed multiple points of extensibility into NAP's client and server enforcement architectures. This allows individual application vendors to supply and support their own mechanisms for enforcement, authentication and identity management; verification of compliance; and remediation of noncompliant clients.
Since these activities are separate in NAP's management consoles, third-party technologies can be added as an organization sees fit.
Third-party technologies for enforcement
Microsoft's options for policy enforcement require the use of Microsoft technologies. For example, Dynamic Host Configuration Protocol enforcement requires Microsoft's DHCP server, while virtual private network (VPN) enforcement requires an Internet Security and Acceleration or Routing and Remote Access server.
In this case, extensibility enables organizations with technologies such as alternate VPNs or switch port authentication infrastructures to plug directly into NAP. Organizations that need special protection for wireless networks and those that want to add pervasive access support, like Microsoft's new DirectAccess capability in Windows Server 2008 R2, can benefit.
Authentication and identity management
Advanced technologies in the enforcement mechanisms enable rich support for authentication and identity management. Users and computers can be positively verified against those allowed in the infrastructure. Permissions to access discrete services can be set at extremely granular levels based on user ID, role, location and other contextual elements.
In addition, user identities can be mapped to linked assets. Tighter links between individual users, their assets and their approved levels of connectivity are increasingly important as more mobile users connect to LAN resources.
Verification of compliance at the client layer
Security software vendors have augmented their client applications to include enforcement components.
For example, consider a typical anti-malware application that an organization has been running for a while. The organization would prefer to keep the existing infrastructure setup and simply add compliance-verification components.
Such an organization could take advantage of the NAP awareness that many enterprise-focused software companies have added to their application infrastructures.
Remediation of noncompliant clients
A NAP infrastructure that kicks out noncompliant clients is only partially useful. You also need automated systems to remediate noncompliant clients relocated to special networks.
In addition, you need extremely precise support to determine what to do with these noncompliant clients, since many types of clients may attempt to connect to an environment. For example, while a corporate asset can be remediated on its first connection within the LAN, a user's home computer requires a different level of security when connecting via a VPN.
Finding the right remediation system that aligns with your security requirements as well as your existing client security setup is critical for a successful NAP deployment. In general, many large organizations will require more from NAP than the native components.
Not only must you find the best add-ons for your organization, but you must also recognize that enforcement mechanisms such as NAP are necessary in today's enterprise environments.
Organizations that don't incorporate an enforcement component are merely hoping or wishing that their servers and workstations remain compliant with security mandates.
http://thelazywebdev.co.uk/2019/07/
Yandex, the Russia-based search engine, has a handful of free-to-use tools, if you should ever feel the need, including a structured data validator and sitemap checker. You can never have too many tools… Here is a little list of some of them:
I have never used a programming framework seemingly designed to frustrate the development process as much as Magento 2. You have to constantly clear caches and static files to see your modifications, even when in developer mode.
Having recently started work on learning the ropes with Magento 2, and finding all sorts of oddities, incongruities and banging-head-on-desk-ities, I decided to take a look at the blog of Alan Storm at alanstorm.com.
Here is a quick, albeit very hacky solution to allow you to add hreflang tags to a page or item via the XML custom layout box in the admin.
https://www.programmableweb.com/api/jsonfiddleme-rest-api-v100/comments
The jsonfiddle.me API allows users to store, access, and edit JSON files online to test their applications. This API works with any device or stack and is CORS (Cross Origin Resource Sharing) enabled. Once a user hosts their JSON data, they are provided with an access code that allows them to update it.
https://cheesewolf.com/portfolio/demolab/
Demolab runs on a unique business concept, but this uniqueness wasn’t represented visually in their online and offline communication.
So they reached out to me to help them find a new visual identity.
They are a technology-showcase company at the forefront of innovative tech in Denmark. Demolab brings experimental gadgets, supermaterials, and prototypes of soon-to-be consumer goods to their clients in a tangible roadshow experience. They do not just show the products; they share the stories and ideas of how the companies created their gadgets, to further inspire and inform.
It was clear from our first conversation that Demolab is not ‘just another tech company’, so we didn’t want to wander off into the world of countless blue-and-white tech identities. With this identity, we wanted to show the connection between the products, their stories, and how Demolab chooses to showcase them to its clients. The colours convey an organic feel: technological advancements are part of our everyday lives, yet remain mysterious until they become widely available.
Now their new identity is reflected on their contracts, in printouts, on social media (SoMe), and in other online uses. The identity utilizes a new logo with a modern cohesiveness that will champion the brand going forward.
https://ask.libreoffice.org/t/cannot-start-any-libreoffice-application-on-my-pc/77577
I’ve been unable to run any LibreOffice (LO) component (mostly Calc and Writer) on my primary Windows 10 PC for the last 6 months. (It used to work just fine.) Any attempt to start an LO app results in an application failure:
Faulting application name: soffice.bin, version: 184.108.40.206, time stamp: 0x621d53ef
Faulting module name: cppu3.dll, version: 220.127.116.11, time stamp: 0x621d22b4
Exception code: 0xc0000005
Fault offset: 0x0000000000020e3e
I’ve been working on this problem off and on for 6 months now. Here is a summary of the fixes I’ve tried:
Downgrading to previous versions of LibreOffice (LO) that once did work on this PC.
Upgraded to newer versions of LO released since the problem appeared.
Ran the install/repair option on each LO version tested.
Upgraded LO from 32-bit to 64-bit.
Copied soffice.exe, soffice.bin and cppu3.dll files from another PC where LO does work. (Matched LO versions)
Attempted to start application in safe mode.
Ran Windows repair utilities. Rebooted more times than I can count.
Deleted the personal LO profiles in %APPDATA%\libreoffice\4\user and rebuilt.
I’ve probably tried a few other things but, in the end, nothing works. Every fix attempt results in the same error in module cppu3.dll. LibreOffice is running fine on my other PCs, but not on the one where I do most of my work.
Any idea what is wrong? Any ideas on how to resolve this issue??
https://www.blackhatworld.com/seo/cloaking-a-picture-from-facial-recognition.907313/
I want to use open-source, labeled-for-reuse profile pictures as fake personas for some of my blogs and websites. The issue is that anyone with half a brain can right-click the image, search Google for it, and destroy my credibility when it comes up in Google Images as a stock photo. I've tried various ways to alter the picture, but Google is so good at recognizing them. Is there any way around this? Thanks!
https://leanpub.com/immutable-infrastructure-with-netflixoss
Immutable Infrastructure With Netflix OSS
Last updated on 2015-07-02
About the Book
*** UPDATE 2016-03-25 ***
Asgard was a core focus in the original text and has since officially entered maintenance status with the Netflix OSS team. This text is receiving an overhaul and moving to a Spinnaker-focused implementation.
This book gives concrete examples for transitioning to (or building) immutable infrastructure with the help of Netflix Open Source Software. You will learn about core software available from Netflix OSS, basic configurations, and how to quickly get up and running in AWS using these tools. Chapters (tentative):
- Immutable Infrastructure - Summary information and introduction to concepts supporting the goals of the text
- Netflix Open Source Software - Introduction to Netflix OSS and core projects that are necessary for immutable infrastructure in AWS
- AMI generation - Review 'baking' and tools for doing so
- Getting started with Asgard - Introduction to Asgard
- Transitioning to Netflix OSS - Concepts and considerations before and during a transition to immutable infrastructure with Netflix OSS
- Build AMIs - Infrastructure as code and possibilities for generating AMIs given an already existent project
- Asgard usage - Moving an existing project into AWS with Netflix OSS and Asgard
- Atlas - Deployment and general monitoring with Atlas
- Simian Army - Useful Simians for your infrastructure
- Additional projects and next steps - Netflix OSS solutions that are beyond the scope of this text, but very much worth your investigation
https://community.amd.com/message/2806129?commentID=2806129
Well, yesterday I was kind of over the moon, having thought I had resolved my DDR4 memory issues.
Then I noticed that the BIOS, Task Manager, and HWiNFO all show a different memory speed than Ryzen Master does.
So in fact Ryzen Master is misreporting the speed of the memory.
Has anyone else noticed the same kind of issue?
https://www.dk.freelancer.com/projects/furniture-manufacturer-needs-filemaker/
We are a wooden furniture manufacturer, and we need a FileMaker database programmer to complete or replace our existing database. The database does the following:
Inventory of parts, assemblies, lumber, and finished furniture.
Time tracking of the various operations (tasks).
Scheduling of production sequence.
The database is done and it works, but it still needs some cleaning up and alterations. The work is in-depth and will require a programmer with FileMaker experience. The initial job that we are offering is specifically to get the inventory system working. Right now the system has a hard time accounting for parts once they are used in an assembly. I know what needs to be done; I just don't have the FileMaker skills.
I expect this job to take 6-8 hours of research and discussion/emailing each other to understand the database as it is, then 12 hours of programming. There will definitely be additional programming beyond this initial project if all goes well.
https://valuesofthewise.com/docs/viewtopic.php?tag=modern-software-development-practices-86e241
These various aspects are used to create a workflow pipeline: a sequence of steps that, when followed, produces high-quality software deliverables. Bitbucket's CI/CD pipelines can make deployments as easy as clicking a button. The first PC and PC software companies gain traction. Software development and IT operations teams are coming together for faster business results. Jira and Trello are used to manage, track, and organize these task lists. Functional languages can provide these higher quality offerings, with features that lower the cognitive overhead of developers interacting with and maintaining a code base. A formal property specification is provided for the algorithm's implementation, and methods like static analysis can be used to prove the correctness of the implementation. The following discussion is not a comprehensive guide, but an overview of the most recognized techniques. You will build and continually refine a fully functional full-stack web application as we progress through the modules in this course. There are discussions about whether Object Oriented Programming was a mistake. With modernization, the growing software development industry in the USA seems to serve as a blessing as it enhances … Being able to identify and discuss technical software development topics can improve your toolset in a software development organization. Lovelace was a mathematician and colleague of Charles Babbage. CI/CD puts in place guardrails that allow developers to push new code and features that then automatically deploy to production environments. The serverless platform will then expose this code function on a URL that can be accessed to utilize the function. Design documents from the concept phase are broken down into actionable tasks. When applied to software, these restrictions can feel draconian and counterproductive.
Collaboration is a critical element of all of our projects, so we begin each engagement with a workshop with the key members of your team, so that we can develop an in-depth understanding of your unique business requirements. Object-oriented design takes a foothold. Design documents from the concept phase are reviewed and broken down into actionable tasks. The next section is a brief timeline showcasing the evolution of software development methods through the ages. His analytical engine was the world's first computer hardware. NATO held two Software Engineering Conferences, in 1968 and 1969. MSA is a distributed network architecture that enables horizontal scaling and network redundancy. DevOps is a modern field of software development which focuses on support and automation for supplementary software development tasks. In "A Brief Survey of Modern Software Development Process Models," developers model the structure and interaction of the objects needed to implement the requirements (Figure A.1). And the deployment itself … Agile management methods are applied on top of these technologies to help coordinate and manage integration and release. The device can read or write to the tape cells and move the head to the next or previous cell. Many agile frameworks provide specifics on development processes and agile development practices, aligned to a software development … Software development: modern practices and where it's headed. Use version control on your code (e.g., git). The 1990s saw a rise in management quality initiatives. Improved performance and productivity: organizations are finding that higher quality, well-designed, user-empathetic programming languages can lay a foundation for higher quality, well-designed, user-empathetic business product output, given a well-designed, up-front formal business domain type specification and a soundly typed language.
Agile arose out of frustration with the 'monumental' methodologies of the past. Among the new trends in software development is microservices: down with the monolith. On average, software and IT teams use 4.3 tools to move code from development to customer-facing production. Learn more about extreme programming rules on the official extreme programming site. For many of us, we don't care about buzzwords. Software development today is generally executed with a complementary agile project management process. Continuous integration and continuous development are the premier examples of the value of automation. Third-party repository hosting services like Bitbucket have become central hubs for teams' distributed, asynchronous communication patterns. Countess Ada Lovelace is often credited with writing the first software algorithm. The tasks are executed and adjusted during a sprint period. This process encompasses the design, documentation, programming, testing, and ongoing maintenance of a software deliverable.
"Monumental" methodologies like Waterfall and SDLC gained widespread usage. Better, faster, and more transparent: a typical modern delivery approach for software development focuses on the entire value chain, combining a mixture of Design Thinking, Lean, Agile, and DevOps practices. It is a change to prioritize the customers' and users' experience with the product. Benefits of modern software development: these principles are applied both to small software products developed by one team and to large ones developed by programs consisting of over ten teams. Our experienced project managers work closely with you to ensure your project stays on track and on budget and is delivered with the highest quality. We assign an experienced project manager who tracks and reports on all aspects of the project from end to end. Additionally, reading this document is already an indication that you are ready to take the first steps and get started! Between 1842 and 1843, Lovelace produced an elaborate set of notes on the analytical engine. The book is on software engineering and project management. ODD defines goals instead of tasks and assigns ownership of those goals to a team, which will be responsible for meeting that goal and implementing it. Some of these principles are Python-specific, but most are not. These methodologies are considered slow and rigid, appropriate for building monuments. If you've ever experienced a team planning meeting where the general consensus was "why are we building this?", it might be time to try outcome-driven development. TQM (Total Quality Management) and the Capability Maturity Model are methods under Six Sigma management. Extreme programming stresses customer satisfaction as the guiding force for development iteration cycles. Requirements definition. It is emerging with the support of a pro-lean subculture within the Agile community.
Lean offers a solid conceptual framework, values and principles, as well as good practices, derived from experience, that support agile organizations. We use the best tools and technologies available to help you address your business challenges in the most efficient and effective way possible. 1. Bitbucket offers code review tools which encourage iterative quality improvement through team discussion. Planning/Roadmap Stakeholders are identified, budgets are set, and infrastructure is requisitioned. Discussions are being had about mistakes in language design. Process. Using modern software development practices enable us to find and fix bugs faster and iterate. A modern application is Extreme programming is a derivative of the Agile process. Moving applications around between cloud machines or hosting providers was a risky and tedious move. The following practices have been adopted by some of the most successful enterprise software companies. 4. CI/CD pipelines are utilized to ensure efficient developer experience. The conferences were attended by international experts who agreed on best practices for software engineering. Incident Management Deprecation and End-of-life activities, including customer notification and migration. Amazon CI/CD Practices for Software Development Teams - AWS Online Tech Talks - Duration: 47:50. Approach 5: Enterprise Transformation. Agile has been fantastic at optimising the development process. Bitbucket offers collaborative code review tools and CI/CD pipelines which plugin to the code review process. Modern software development often uses Agile and Lean principles which focus on the customer’s requirement for continuous delivery of new functionality. 3. More like a bunch of guidelines & principles. Concept Projects are envisioned, designed, and prioritized. 
A litmus test I gave clarity on what I believe constitutes an Agile project and argued that just because you work on a project that uses modern software development practices (which many mistakenly refer to as 'Agile') doesn't mean your project is Agile. Let's build your business together. Learning software development can be a great exercise to open up new avenues for career growth. We use a combination of automated testing and user testing to ensure the product meets the highest standards for quality and usability. These projects maybe identified from software you already use and enjoy. Most modern business operations have some level of human-computer interaction. Understanding the design and implementation of how software works can help an individual operate more efficiently in personal and work life. Structured programming aimed at improving the clarity, quality, and development time of a software project. Deploy/Release and Hosting Once code has been approved and merged it’s time to ship it. A regular planning period is conducted in which expectations are … It has become about following best practices to produce high quality code, resulting in high quality applications which means happy users and customers. Business 4. 6. Software development today is generally executed with a complementary agile project management process. Continuous integration and Continuous development, are the premier examples of the value of automation. Adrian Bridgwater. Third party repository hosting services like Bitbucket have become central hubs for teams to use distributed asynchronous communication patterns around. Countess Ada Lovelace is often credited with writing the first software algorithm. The tasks are executed and adjusted during a sprint period. This process encompases the design, documentation, programming, testing and ongoing maintenance of a software deliverable. 
Structured programming emphasized use of the structured control flow constructs of like (if/then/else) and repetition (while and for), functions, and subroutines instead of using goto statements and conditional tests. Jira Service Management provides powerful tools to capture, triage, and resolve customer support requests. Containerization is an emerging trend that automates hosting and deployment responsibilities in DevOps (the automation of developer support duties like infrastructure management). Code/Review/Test Development teams work to build production ready software that meets requirements and feedback. Along the way you will be exposed to agile software development practices, numerous tools that software engineers are expected to know how to use, and a modern web application development framework. Code/Review/Test The development team works to produce production ready software that meets requirements and feedback. Unfortunately, there’s no single indicator with which to measure how “modern” a software company is. What’s common across modern development organizations. Deployments were risky affairs where teams would manually copy files between servers and the network could fail or desync a deploy across a cluster. Modern software development is not just about using the latest tools and frameworks, it's more about how and why use them. IT Support Support and maintenance is required of active software projects. Before software development became a craft with a history and doctrine, the concept of software first needed to be created! As Atlassians, we've had the opportunity to interview thousands of customers about modern software development processes. Overall these trends are lowering the costs required to develop new projects and lowering the barrier to entry for non-technical team members to contribute to software development. Modern applications use cloud hosting provided by Amazon AWS, Google Cloud Platform, or Microsoft Azure. 
To validate our findings, we surveyed software development leaders to understand which practices are at the heart of the highest performing teams. Topics Covered. This is also true for mobile application development. A disruptive book, The Mythical Man-Month: Essays on Software Engineering, is published in 1975. Software development has a surprisingly rich history. Amazing advantages of using modern software development practices in your organization. Version Control. This post offers a primer into some of these modern practices. Deploy/Release and Hosting With code approved and merged, it’s time to ship it. A regular planning period is conducted in which expectations are set, dependencies are addressed and tasks defined. Stay up to date with Stackify’s blog for tips and tools to make you a better developer. Formal verification is the process of proving or disproving the correctness of algorithms in an underlying system. Planning/Roadmap Stakeholders are identified, budgets set, and infrastructure requisitioned. After the analysis and design is complete, the team implements the design We build innovative software for our customers using cutting-edge software development tools and techniques. Data Insights Twitter. The conferences produced two reports that defined how software should be developed. Our design team conducts user research, defines user flows, creates wireframes and develops UX and UI designs that delight users. DevOps teams build tools to automate and maintain mundane software development chores like infrastructure maintenance. It can be thought of as a mathematical process like verifying an algebraic expression. This new process greatly simplifies the development-to-production release pipeline. • How do we put it into practice? In the paper Turing presented that a Turing machine could solve any problem that could be described by simple instructions encoded on a paper tape. 
Software was delivered to the analytical engine through punch cards which denoted computations the machine would execute. Emerging trends such as feature flagging, continuous delivery, and others, Software development’s evolution throughout the 20th century, Applying the popular management framework, Extreme programming, lean development, and the waterfall model. The rise of cloud hosted infrastructure has brought a new ease to deploying traditional server software stacks. We help you identify key issues and opportunities for your organization and work with you to implement intelligent solutions that provide measurable business benefits. To maximize the benefits of building an enterprise app you have to follow the modern approach. IT Support Ongoing support and maintenance is required of active software projects. Extreme programing follows a five step iteration cycle. She also developed a vision of computers to go beyond mere calculating or number-crunching. Businesses these days are finding it much challenging to run them at the platforms where they could survive for longer. “Modernity” is a spectrum. The motivation of this restriction comes from the prohibitively expensive phases of execution in physical environments. It is a highly structured process that strictly restrains movement from one phase to the next. CWDN series: what defines modern software development practices? Facebook. The meaning of visibility is that production of good quality, consistent, and standard documents, that makes software project management easier. Software is a much more forgiving and fluid end product than manufacturing. Partners Many modern software development practices can be beneficial to any type of project. Actions: Wholesale decentralization and re-organization, including full automation of software testing and delivery, introduction of quality engineering and Lean Startup practices. It can be intimidating for outsiders and newcomers to approach. 
The following things are common among most people I spoke with. Many modern networked applications have adopted a Microservices architecture (MSA) infrastructure. Software development is a technical craft with a steep learning curve and deep history. This pathway integrates modern software development practice such as Agile Software Development, DevSecOps, and Lean Practices. 6. Continuous Improvement methodologies like Six Sigma encouraged strict guidelines for quality assurance. Teams would have to collaborate and sequence when to merge features and think about avoiding conflicts in code updates between team members. This outreach has enabled us to uncover customer needs, pain points, and future plans in order to build our product roadmaps meaningfully. As Atlassians, we've had the opportunity to interview thousands of customers about modern software development processes. This prompted IBM to decouple their software and service business from their hardware sales business. During the sprint period these tasks are updated as they progress to completion. Software development today is generally executed with a complementary agile project management process. What does “modern” mean exactly? While there are other great processes that effective teams often follow, the above four elements are critical to effective modern software development. Why Modern Software Development Practices Are Good For Your Business? Confluence is a great tool to develop product research documents and share design files during this stage. If you're still not convinced, here's why modern software development practices are good for your organization. Team Extension, Modern Software Practices Typically with "normal" software designs today, you would need some sort of placeholders for your external dependencies, but now your tests are reliant on those placeholders being correct. 
In the digital fast-paced era, the company that doesn’t want to lose big to its more innovative competitors has to embrace innovation. These notes contain what many consider to be the first computer program. This book includes case studies and real-world practices and presents a range of advanced approaches to reflect various perspectives in the discipline. We define the product architecture, which includes all major design and technology decisions and acts as the blueprint for your product. In my last post Am I on an Agile project? A modern approach to enterprise mobile application development. In order to tackle these modern goals, we found that software teams are making use of 4 main software development practices, and will continue to do so in 2019. The live production code will need a place to live. We have expertise across the full development lifecycle. In 1936 Alan Turing invented the Turing Machine. They represent the non-aspirational state of software development. If you lack any one of the above four, you will not be able to deliver a constant velocity with minimal regressions with a scalable team for an extended period of time. ... Modern Software Delivering Business Value at Startup Speed - Duration: 36:03. The rules are Planning, Managing, Designing, Coding, and Testing. The first standard software methodologies emerge; Study Organization Plan from (IBM) and Accurately Defined Systems from (NCR) Both companies publish printed books and distribute them to employees. Git was created to help manage these new distributed projects. This leads to less bugs and higher quality software. Concept Projects are envisioned, designed, and prioritized. If you’ve ever experienced a team planning meeting where the general consensus was “why are we building this?” It might be time to try outcome-driven development. Technologies, 68 Chamberlain Ave, Suite 200 Ottawa, ON, Canada, K1S 1V9, Contact us today and let’s talk about your project. 
But if it’s faster software deliver you’re after, this approach gets the job done. What follows is an overarching summary and perspectives of the software … A device which has two primary components: a strip of tape divided into cells and a head which points to the tape cells. Software is everywhere these days: phones, TVs, cars, vending machines, coffee makers, and pet toys all offer some software driven features. 1. Before CI/CD gained popularity merging code and deploying were a much more cumbersome process. Agile takes advantage of these properties and provides a complementary management methodology. We create a detailed statement of work (SOW) that serves as the blueprint for the entire project. This pipeline is known as the Software Development Lifecycle. Learn from enterprise dev and ops teams at the forefront of DevOps. 5. Turing later published a paper: “On Computable Numbers, with an application to the Entscheidungsproblem," which became the foundation of computer science. Before feature flagging, teams would push entire features out to all production users as part of a regular code release. Don’t let this deter you from learning either the basics and/or higher level topics—there are many resources online that offer guides on learning Agile development. I also read about RSpec for auto/unit/whatever tests (I haven't written a single test in my life). Structured programming, a precursor to Object Oriented Programming, rises in popularity. Its central theme is that "adding manpower to a late software project makes it later.”. To combat this, many modern software development practices are beginning to employ the use of design systems to provide an outstanding experience to the end user and streamline processes internally. The art of software development has many deep schools of thought. The live production code will need a place to live. 
This outreach has enabled us to uncover customer needs, pain points, and future plans in order to build our product roadmaps meaningfully. ODD is a workflow process that encourages rapid, lightweight software development. Most of the software development trends in 2020 require more than just basic programming knowledge, but it’s never too late to add additional competencies to your toolbox. A developer can write and upload a simple code function that takes input and returns output. The immediate future of software development is directed by a few core values: automation, transparency, and democratization. We all love workplaces brimming with sky-high results and roaring with the sounds of productivity. The inception of software development is often traced back to Charles Babbage the mid-1800s. All of these products have been created by groups of people that have organized with the goal of making electronic signals behave in a desirable pattern. A reemergence of lost and under utilized functional programing paradigms from the last century is underway. Cloud Enablement Modern Software Engineering Concepts and Practices: Advanced Approaches provides emerging theoretical approaches and their practices. Specialized organizational software for task tracking, like Jira, is used to monitor the state of individual task within a holistic sprint view. Outcome driven development is a workflow process that encourages rapid, lightweight software development. Understanding how your team can use these practices to increase the speed of development can lead to a competitive advantage. My passion is for testing, as I believe that good testing practices can both ensure a minimum quality standard (sadly lacking in many software products), and can guide and shape development itself. Enterprise design thinking. Feature flagging reduces the risk for deployments by allowing safe validation of features in a production environment before exposing them live to all live customers. 
Confluence is a great tool to develop Product research documents and share design files during this stage. foreSight Blog, Telecommunications DevFoundries is a leader in technical delivery. We develop and test the software that implements the detailed requirements for the project. In 2001 a group of software developers frustrated with the existing cumbersome management system, gathered and published the the agile manifesto. The waterfall model is a development process that originated in the manufacturing and construction industries. Computers gained extensive adoption in enterprise business. A new generation of developers are rediscovering languages like Ocaml, Haskell, and Lisp. Before containerization gained adoption developers would have no guarantees that their application would behave the same on different machines. Today, as the concept of a modern software company continues to evolve, we need new ways to measure where companies fall on the road to modern software development. A microservice implementation will break an application in to separate deployments that correspond to business needs. One of the best exercises for getting started is to observe, or participate in, a successful open source project, such as Git. Additionally, Bitbucket has CI/CD pipelines which plugin to the code review process. Modern applications? A basic knowledge of software development is becoming more and more valuable. Languages like idris, coq, agda are exploring these ideas. CI/CD pipelines ensure efficient developer experience. Generative tools can be used to write a code base that matches the formal specification. Feature flagging is a practice which enables ‘soft releases’ of new code. Our software development process has been honed by 25+ years of on-the-keyboard software development. LSD is adapted from the Toyota Production System. 
Modern Software Development Practices Getting the right things done right; Michael Cheng @CoderKungfu email@example.com; None; Agility in Practice • The Agile Manifesto is not very prescriptive. Methodologies like Waterfall and TQM were born in the slower moving, less forgiving industries of physical goods manufacturing. Also, now you have a new type of code asset that is potentially extremely expensive to maintain but contributes no value in … Software development today primarily builds off the workflows established in the 90s open source world. About These are the trends and practices that are enabling software companies to execute next level vision and delivery of product. Jira and Trello manage, track, and organize these task lists. The modern software development practices emphasize better visibility of design and code. This process is in-effect during steps 3 and 4 from the Software Development Lifecycle. The 90s saw the dawn of open source software. 130 Views. In 1969 the U.S. Department of Justice filed an antitrust suit against IBM. Trends and best practices for provisioning, deploying, monitoring and managing enterprise IT systems. Consider using cloud hosting provided by Amazon AWS, Google Cloud Platform, or Microsoft Azure. In short, software development is the overall process involved when taking a software project from conception to production delivery.
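Feature flagging, described above as a way to validate features in production before exposing them to everyone, can be sketched in a few lines. This is a minimal illustration, not a production implementation; real teams typically use a dedicated flagging service or library, and the flag name, rollout percentage, and hashing scheme here are all illustrative assumptions:

```python
# Minimal feature-flag sketch. Deterministic hashing buckets each user
# so the same user always sees the same variant of the feature.
import hashlib

FLAGS = {
    # Hypothetical flag: roll the new checkout flow out to 20% of users.
    "new_checkout": {"enabled": True, "rollout_percent": 20},
}

def is_enabled(flag_name, user_id):
    """Return True if this user falls inside the flag's rollout bucket."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100     # stable bucket in [0, 100)
    return bucket < flag["rollout_percent"]

# The same user always lands in the same bucket:
assert is_enabled("new_checkout", "user-42") == is_enabled("new_checkout", "user-42")

# Across many users, the observed share approximates the rollout percentage:
share = sum(is_enabled("new_checkout", f"user-{i}") for i in range(1000)) / 1000
print(f"observed rollout share: {share:.0%}")
```

Ramping the rollout up to 100%, or killing a misbehaving feature, then becomes a configuration change rather than a code deploy, which is exactly the "soft release" property the practice is named for.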
This coming October 27th & 28th, we welcome you to join our upcoming cohort for a hands-on developer bootcamp where you will solve machine learning problems end to end using logistic regression. This is a FREE 12-hour bootcamp [9:00 am - 4:00 pm] spread over two days.
Professional trainers will teach you how to use python effectively. This workshop explores Python's place in the scientific ecosystem, and how the language, with several readily available open-source libraries, can serve as a powerful tool for data analysis.
Why Python for Data Science is Important?
Python is a general-purpose programming language that is becoming more and more popular for doing data science. It is often the choice for developers who need to apply statistical techniques or data analysis in their work, or for data scientists whose tasks need to be integrated with web apps or production environments. In particular, Python really shines in the field of machine learning. Its combination of machine learning libraries and flexibility makes Python uniquely well-suited to developing sophisticated models and prediction engines that plug directly into production systems.
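To make the point about Python and machine learning concrete, here is a from-scratch logistic regression sketch in plain Python. It is illustrative only (this is not the bootcamp's code, and a real workshop would likely lean on established libraries such as scikit-learn); the toy dataset is invented for the example:

```python
# Logistic regression trained by plain stochastic gradient descent.
# Everything here uses only the standard library, to show the mechanics.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights and bias by gradient descent on the log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = pred - yi                      # gradient of log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Invented, linearly separable toy data: label is 1 when x0 + x1 > 1.
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9], [0.4, 0.3], [0.7, 0.9]]
y = [0, 1, 0, 1, 0, 1]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])
```

The same model is one line with a library (`LogisticRegression().fit(X, y)` in scikit-learn), which is why Python's library ecosystem, discussed below, matters so much in practice.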
One of Python’s greatest assets is its extensive set of libraries. Libraries are sets of routines and functions that are written in a given language. A robust set of libraries can make it easier for developers to perform complex tasks without rewriting many lines of code.
What’s the Job Market for Data Scientists Like?
With millions of worldwide job openings in Big Data, the role of a data scientist has become the hottest job of the decade. In today’s data-based world, companies are using the insights that data scientists provide to stay one step ahead of their competition while keeping overhead costs low. Big names like Oracle, Apple, Microsoft, Booz Allen Hamilton, State Farm, Walmart, and more all regularly have job postings for data scientists.
According to Forbes, for most of 2016, there were an average of 2,900 unique job postings for data scientists each month. According to a McKinsey Global Institute study, it’s predicted that by 2018, there will be almost 200,000 open positions.
What are the Topics Covered?
Basics: Variables and Elementary Types, Operations, Console and Functions
Data Structures: Tuples, Lists, Sets, Dictionaries/Maps
Control flow statements: if, for, break, continue and else statements, while loops
File handling: File I/O and context managers
Exception handling: try, except and finally statements, handling and raising exceptions.
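The topics listed above can all appear in one short sketch; the file name and the score data are invented for illustration:

```python
# A small tour of the listed topics: data structures, control flow,
# file handling with a context manager, and exception handling.
import os
import tempfile

scores = {"ann": 91, "bob": 78, "eve": 85}                  # dictionary/map
passing = [name for name, s in scores.items() if s >= 80]   # list + control flow

path = os.path.join(tempfile.gettempdir(), "scores.txt")    # illustrative file
with open(path, "w") as f:          # context manager closes the file for us
    for name in passing:
        f.write(f"{name}\n")

try:
    with open(path) as f:
        lines = [line.strip() for line in f]
except FileNotFoundError:           # handling a specific exception type
    lines = []
finally:                            # runs whether or not an exception occurred
    os.remove(path)

print(lines)  # → ['ann', 'eve']
```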
Who Can Attend?
If you have a desire to learn new things and some programming background, this bootcamp is for you.
For the last 25 years, Venkatesh has been working across various domains and technologies with DATA as the common theme: starting with data warehouses, proceeding on to data mining and business intelligence, and now machine learning, deep learning & AI.
He has successfully co-founded and exited a couple of startups so far: one a business intelligence product for enterprises, the other an insurance-sector product. Currently, he is invested in a few startups in the ML area and also sits on the boards of a few more.
Venkatesh has a Master's in Computer Science and an MBA. He combines his formal education and experience with his passion for DATA to develop predictive analytics capabilities for his enterprise clients in the pharmaceutical and insurance verticals.
In his spare time, Venkatesh also follows his passion for teaching by conducting machine learning workshops, where he coaches aspiring students in the joy of DATA.
Head of AI lab
Our past and current projects
DIY workshops DO:TOPIA
DO:TOPIA is a project aiming to empower women and minorities in IT by hosting open workshops with these groups' needs in focus.
Media & resource site killjoy.dk
Killjoy is a small site with user-driven blogs and occasional niche news coverage. Bonobo Radical Collective is part of the editorial staff and covers running costs for the site.
Slutwalk Copenhagen 2016
In 2016 we took part in organizing the Copenhagen reboot of an international protest tradition against rape culture and slutshaming. Slutwalk had previously been held in 2011.
We Can Edit in 2015
We were involved in organising the Wikipedia edit-a-thon We Can Edit in 2015 focusing on the Wikipedia gender gap in both content and editing.
Website for MIX COPENHAGEN
We help out with updates and repairs on the film festival's WordPress platform and cooperate through our other projects.
The word I couldn’t think of for the triangle segments that intersect is concurrencies.
I ❤ SBG. One change I'm going to make (pace Cornally) is to make the first quiz feedback only. The second quiz is the grade. This will make it much easier to explain and track.
10 November 2018 / Music Player

Grey: A Material-designed music player developed in Flutter

Plugins
Music player plugin used: Flute-music (GitHub / Flutter Awesome / Music Player)

Related posts:
- 06 June 2019: A Flutter music app made with Provider and the BLoC pattern.
- 25 May 2019: Minimalistic local music player built with Flutter for Android. It uses the audioplayer plugin to play files, and path_provider to locate the external directory and search it for playable files.
- 27 February 2019: A complete, open-source music player in Flutter with a cool UI and design, the first complete music player designed in Flutter.

Next post, 10 November 2018: SpaceX GO! (codenamed Project: Cherry), a simple, open-source, unofficial SpaceX launch tracker app built for fun and with educational purposes.
Previous post, 08 November 2018: A Flutter plugin for jumping to system settings.
If you're designing a new storage system, read about these seven storage gotchas that could lead to you having a lot of time on your hands.
Designing a storage solution isn't a trivial undertaking; there are many moving parts, many decisions to be made, and just as many mistakes that can be made. Here are seven mistakes that might lead to you getting in trouble.
1. Not taking RAID storage overhead into consideration.
Unfortunately, I've actually seen this happen. Any responsible storage implementation will probably use RAID to protect against the loss of one or more disks. With the exception of RAID 0, which is just a bunch of disks strung together to create a larger storage pool, all RAID implementations result in storage-related overhead that is used for mirror or parity information. The storage overhead requirements can be substantial. For example, in a RAID 1 implementation, 50% of the total disk space is used to copy the information to the mirrored set of drives. RAID 10 -- an extension of RAID 1 that stripes data across multiple RAID 1 sets to improve performance -- exacts a 50% space toll but is frequently used due to its significant performance benefits. Don't forget to take into consideration RAID overhead when deciding how much storage you need to buy.
RAID storage penalty for common RAID levels:
- RAID 0: No storage penalty, but no protection either.
- RAID 1: 50% storage penalty (mirrored disks).
- RAID 5: 1/n storage penalty where n is the number of disks that make up the array.
- RAID 6: 2/n storage penalty where n is the number of disks that make up the array.
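The penalties above translate directly into usable capacity. Here is a quick sketch, assuming n identical disks (RAID 10 carries the same 50% mirroring penalty as RAID 1, as noted earlier; the eight-disk, 4 TB example is illustrative):

```python
# Usable capacity after the RAID storage penalties listed above,
# assuming n identical disks of disk_tb terabytes each.
def usable_capacity(level, n, disk_tb):
    total = n * disk_tb
    if level == 0:
        return total                    # no penalty, but no protection
    if level in (1, 10):
        return total / 2                # 50% mirroring penalty
    if level == 5:
        return total * (n - 1) / n      # one disk's worth of parity
    if level == 6:
        return total * (n - 2) / n      # two disks' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# Eight 4 TB disks:
for level in (0, 1, 5, 6, 10):
    print(f"RAID {level:>2}: {usable_capacity(level, 8, 4):.0f} TB usable")
```

With eight 4 TB disks, 32 TB raw shrinks to 28 TB under RAID 5, 24 TB under RAID 6, and just 16 TB under RAID 1 or 10, which is exactly why the overhead must be budgeted before purchase.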
More information about RAID levels:
- Understand 'single digit' RAID levels
- Get the basics on multilevel RAID sets
- Build Your Skills: Know the differences between RAID levels
2. Not taking RAID performance overhead into consideration.
RAID exacts more than just a storage penalty; in addition to reducing the amount of usable disk space, different RAID levels also impact the overall performance of the storage system. Different applications require different storage performance characteristics. Different RAID levels are best suited to different kinds of applications. For example, because of the need to calculate parity for RAID 5 and RAID 6, those RAID levels are not always suitable for write-intensive tasks such as, for example, SQL Server log files. Choosing a RAID level that is not best suited for your application will not yield the best possible results.
In general, here are some pointers:
- RAID 1: Read: Good, Write: Good
- RAID 5: Read: Good, Write: Mediocre
- RAID 6: Read: Good, Write: Poor (double parity calculation and storage)
- RAID 10: Read: Very Good, Write: Very Good
Don't take this list to the bank, though; performance needs and characteristics vary wildly between applications, so do your homework!
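One way to make those read/write characteristics concrete is the commonly cited per-write I/O penalty for each RAID level (roughly 2 for RAID 1 and 10, 4 for RAID 5, 6 for RAID 6). The raw IOPS figure and the 70/30 workload mix below are illustrative assumptions, so treat the output as a back-of-the-envelope estimate only:

```python
# Effective IOPS for a given read/write mix, using commonly cited
# per-write I/O penalties. Raw IOPS and the mix are illustrative numbers.
WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def effective_iops(level, raw_iops, read_fraction):
    """Scale raw backend IOPS by the RAID write-amplification factor."""
    write_fraction = 1.0 - read_fraction
    return raw_iops / (read_fraction + write_fraction * WRITE_PENALTY[level])

raw = 12 * 150          # e.g. twelve 10K RPM disks at ~150 IOPS each
for level in (1, 5, 6, 10):
    print(f"RAID {level:>2}: {effective_iops(level, raw, 0.7):.0f} IOPS "
          "at a 70/30 read/write mix")
```

The gap between RAID 10 and RAID 6 widens as the workload gets more write-heavy, which is the arithmetic behind the "poor write performance" warning above.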
- RAID storage explained
- Comprehending the Tradeoffs Between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations
- EMC CLARiion RAID 6 Technology: A Detailed Overview
- RAID 1+0 is the Cadillac of RAID
- Comprehensive RAID performance report
3. Not implementing a solution with enough spindles.
IOPS (Input/Output Operations Per Second) is a standard method by which storage performance is measured. While a lot of elements go into figuring out the total input/output capacity of a storage infrastructure, the number of spindles (a common way to refer to the number of disks in a storage solution) is one of the most important that you can design in. The more spindles you throw at a solution, the better the overall performance will be. Many people often assume that the transport mechanism -- iSCSI, Fibre Channel, etc. -- is the primary limiting factor from a performance standpoint, but this is often not the case. Each individual disk in your storage system is capable of a maximum number of IOPS. This maximum number is multiplied by the number of usable disks in your RAID configuration to arrive at a theoretical maximum IOPS value.
For some applications, you can figure out the number of IOPS that you need, but for other applications, you need to work with the vendor to arrive at a reasonable calculation. Without enough spindles to support your load, the rest of the storage design simply won't matter.
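A rough spindle-count sizing sketch follows. The per-disk IOPS figures are typical rule-of-thumb values, not vendor specifications, and the 5,000 IOPS target is an invented example; your real numbers will vary with disk model and workload:

```python
# How many spindles for a target IOPS load? Per-disk figures below are
# rule-of-thumb values for illustration, not vendor specifications.
import math

PER_DISK_IOPS = {"7.2K SATA": 80, "10K SAS": 140, "15K SAS": 180}

def spindles_needed(target_iops, disk_type):
    """Round up, since you cannot buy a fraction of a disk."""
    return math.ceil(target_iops / PER_DISK_IOPS[disk_type])

for disk_type in PER_DISK_IOPS:
    print(f"{disk_type}: {spindles_needed(5000, disk_type)} disks "
          "for a 5,000 IOPS workload")
```

Combined with the write-penalty arithmetic of the previous section, this is how a target workload turns into a concrete disk count; without enough spindles behind it, no transport choice will save the design.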
4. Choosing a RAID level that leaves your organization at risk.
RAID has long been considered the gold standard for data protection; however, when used incorrectly, that protection may be only an illusion. Besides storage and performance needs, your RAID level must account for the level of protection you want to maintain in the environment. RAID 5 is, by far, the most common RAID level out there and, when used correctly, provides organizations with a degree of protection. However, as drive sizes get larger, the risk of data loss increases quickly. Since RAID 5 can tolerate the loss of only a single disk, losing two disks is a recipe for disaster.
For more information:
- There are some people who truly hate RAID 5... the group is named BAARF.
- How to protect yourself from RAID-related Unrecoverable Read Errors (UREs)
- RAID 5 Is A Cruel Mistress
- Why RAID 5 stops working in 2009
5. Using the wrong kind of disk.
I already indicated that you need to make sure you have enough spindles to support the needs of your application environment. Along with that spindle count, make sure you get the right kind of disks, because neither IOPS capacity nor reliability is equal across disk types. First, SATA disks can be one or two orders of magnitude less reliable than SAS disks and create a much higher risk for data loss (read my URE article). Second, most SATA disks spin at slower rates than their SAS counterparts. Although there are enterprise-grade SATA disks that spin at 10K RPM, SAS disks almost always have a 10K RPM minimum speed and can spin as fast as 15K RPM. The faster the disk spins, the more quickly it can read and write information and, hence, the higher its IOPS value.
Note that there are tricks (such as short-stroking) that you can use to force more IOPS from a disk, but I'm not going to get into those here.
6. Not configuring a hot spare.
A hot spare is a critical part of a redundant storage system and provides the system with a way to immediately begin recovering from the loss of a disk due to hardware failure or some other catastrophe. The quicker that an array begins to rebuild after a failure, the less likely it is that the array will suffer another disk fault that could end up resulting in the loss of data from the entire RAID volume.
Using a hot spare results in the immediate loss of that disk as usable space in the array. With many people creating multiple RAID sets on an array, you might be concerned about losing a hot spare per RAID set. Many arrays will allow you to configure a global hot spare that can automatically take the place of any drive in any RAID set across the entire array, so you can minimize your hot spare overhead while continuing to meet availability needs.
7. Not implementing enough redundancy.
Depending on the way that your storage environment will be used, you will implement different levels of redundancy. For primary, high-need storage, make sure that you implement enough redundancy in the environment to meet business needs -- that may mean dual controllers, dual UPSs, redundant data paths to the storage, redundant replicated arrays, and much more.
When designing your storage, draw every component on paper. Then, in turn, place an X over each component and determine the impact if that particular component were to fail; for each component, decide if you need an additional level of redundancy. For example, at Westminster, we use a dual-controller EMC AX4 iSCSI array. The whole storage infrastructure is redundant, from the controllers to the Ethernet switches that service the storage network. Each server that connects to the storage uses multiple NICs and two connections to storage, and neither connection shares a common NIC in the server: we use one motherboard NIC connection and an add-in Ethernet adapter connection in order to protect against the failure of a single NIC.
- A look at an iSCSI-based highly available architecture
- A look at some more AX4/iSCSI availability diagrams
- EMC AX4 - A failover update
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00271.warc.gz
|
CC-MAIN-2021-10
| 8,447
| 48
|
https://applicate.eu/publications/312-statistical-predictability-of-the-arctic-sea-ice-volume-anomaly-identifying-predictors-and-optimal-sampling-locations
|
code
|
Ponsoni, Leandro; Massonnet, François; Docquier, David; Van Achter, Guillian; Fichefet, Thierry
This work evaluates the statistical predictability of the Arctic sea ice volume (SIV) anomaly – here defined as the detrended and deseasonalized SIV – on the interannual time scale. To do so, we made use of 6 datasets, from 3 different atmosphere-ocean general circulation models, with 2 different horizontal grid resolutions each. Based on these datasets, we developed a statistical empirical model which in turn was used to test the performance of different predictor variables, as well as to identify optimal locations from which the SIV anomaly could be better reconstructed and/or predicted. We tested the hypothesis that an ideal sampling strategy characterized by only a few optimal sampling locations can provide in situ data for statistically reproducing and/or predicting the SIV interannual variability. The results showed that, apart from the SIV itself, sea ice thickness is the best predictor variable, although total sea ice area, sea ice concentration, sea surface temperature, and sea ice drift can also contribute to improving the prediction skill. The prediction skill can be enhanced further by combining several predictors into the statistical model. Feeding the statistical model with predictor data from 4 well-placed locations is enough for reconstructing about 70% of the SIV anomaly variance. An improved model horizontal resolution allows a better-trained statistical model, so that the reconstructed values come closer to the original SIV anomaly. On the other hand, if we look at the interannual variability, the predictors provided by numerical models with lower horizontal resolution perform better when reconstructing the original SIV variability. Beyond 6 well-placed locations, the statistical predictability does not substantially improve with the addition of new sites.
As suggested by the results, the first 4 best locations are placed at the transition Chukchi Sea–Central Arctic–Beaufort Sea (158.0° W, 79.5° N), near the North Pole (40.0° E, 88.5° N), at the transition Central Arctic–Laptev Sea (107.0° E, 81.5° N), and offshore the Canadian Archipelago (109.0° W, 82.5° N), in this respective order. We believe that this study provides recommendations for ongoing and upcoming observational initiatives, in terms of an Arctic optimal observing design, for studying and predicting not only the SIV values but also its interannual variability.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00145.warc.gz
|
CC-MAIN-2021-17
| 2,501
| 2
|
https://www.tdcommons.org/dpubs_series/2017/
|
code
|
In a programming language with support for garbage collection, a write barrier is a code snippet that maintains the key invariants of the garbage collector. The write barrier is typically executed after a write operation. The write barrier is computationally expensive and can impact program performance. This is true to a greater extent for languages where garbage collectors need to maintain multiple sets of invariants. For example, languages that employ garbage collection schemes with two collectors may maintain their invariants using multiple different write barriers. The techniques of this disclosure address the problem of maintaining multiple invariants by unifying the write barriers and by executing computationally expensive parts of the write barrier in a concurrent thread.
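As a loose illustration of the idea (a hypothetical sketch, not code from any real runtime; the class, queue, and invariant-set names are all invented), the inline fast path only records the written slot, while a concurrent worker performs the expensive invariant maintenance for both collectors in one unified place:

```python
import queue
import threading

class UnifiedWriteBarrier:
    """Toy model: cheap inline recording, expensive work on a concurrent thread."""

    def __init__(self):
        self._dirty = queue.Queue()   # writes awaiting slow-path processing
        self.marked = set()           # invariant set for collector A (e.g. marking)
        self.remembered = set()       # invariant set for collector B (e.g. a remembered set)
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, obj, field, value):
        obj[field] = value            # the actual store
        self._dirty.put(id(obj))      # fast path: just record the written slot

    def _drain(self):
        while True:                   # concurrent slow path, shared by both collectors
            slot = self._dirty.get()
            self.marked.add(slot)     # maintain collector A's invariant
            self.remembered.add(slot) # maintain collector B's invariant
            self._dirty.task_done()

    def flush(self):
        self._dirty.join()            # wait until all recorded writes are processed

barrier = UnifiedWriteBarrier()
obj = {}
barrier.write(obj, "child", "payload")
barrier.flush()
```

The mutator pays only for the queue push; the two collectors' bookkeeping is unified in a single drain loop off the hot path.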
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Lippautz, Michael; Degenbaev, Ulan; and Payer, Hannes, "Unified concurrent write barrier", Technical Disclosure Commons, (March 12, 2019)
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473518.6/warc/CC-MAIN-20240221134259-20240221164259-00204.warc.gz
|
CC-MAIN-2024-10
| 1,024
| 4
|
http://thereptilereport.com/enter-to-win-a-free-bearded-dragon/
|
code
|
The Reptile Report - This contest has ended. Please check The Reptile Report’s Facebook page for future contests. Enter to win 200 free large mealworms! How to enter: Like the ABDragons Facebook page...
Last Chance: Enter to Win a FREE Bearded Dragon
The Reptile Report - The contest is now over. Congratulations to Laura Waugaman for winning the bearded dragon and feeders! Check out The Reptile Report’s Facebook page for giveaways every Friday!
Enter for your chance to win a FREE hypo red possible het trans bearded dragon, 500 1/4″ Dubia roaches and 500 medium NutriGrubs!!
How to enter: Like the DubiaRoaches.com Facebook page and The Reptile Report’s Facebook page and share the contest post from The Reptile Report’s Facebook page publicly (click the big green arrow below to go to the contest post). We will choose one random winner from those who meet the requirements. The more you share, the better your chances are to win. You may share the contest post once per day.
The winner will be chosen on Thursday, June 1st, 2017 at 10:00PM EST. This giveaway is open to domestic USA residents only (sorry international friends, but shipping becomes too complicated outside the USA). Shipping is free and is included with the prize package.
Big shout out to The Dragons Den, who donated this amazing bearded dragon. Go ahead and give them a like to see some great content!
The winner must prove to us that they have both the knowledge and resources to take care of the animal properly before the animal will be shipped.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867277.64/warc/CC-MAIN-20180526014543-20180526034543-00474.warc.gz
|
CC-MAIN-2018-22
| 1,553
| 8
|
http://www.geekstogo.com/forum/topic/318579-is-my-ram-operating-at-the-right-mhz-and-timings/
|
code
|
G.SKILL Ripjaws Series 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model F3-12800CL9Q-16GBRL
It works fine (computer boots), when I run Speccy however I get the following as the recognized memory:
16 GB Dual-Channel DDR3 @ 795Mhz (11-11-11-28).
At first I thought this was running at half the speed it should be. Then I read some postings about doubling the actual speed to get the effective MHz as rated. I was further confused when I read on Newegg that the stated RAM's timings are 9-9-9-24-2N; as you can see from the Speccy output it appears to be running at 11-11-11-28.
Is my RAM running at 1600MHz? Is my timing less than optimal? I don't want to leave a lot of speed on the table if it is a simple matter of adjusting settings in the BIOS (all my current timing settings are set to auto; I checked before installing the RAM). I'm not interested in overclocking the memory, I just want to get the rated amount, and my motherboard claims to support 1600MHz+.
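The doubling rule works out like this (a quick back-of-the-envelope check against the figure Speccy reported):

```python
# DDR ("double data rate") memory transfers on both clock edges, so the
# effective transfer rate is twice the actual I/O clock that tools like
# Speccy report.
actual_clock_mhz = 795            # value reported by Speccy
effective_rate = actual_clock_mhz * 2
print(effective_rate)             # close to the rated DDR3-1600
```

So 795 MHz reported is roughly 1590 MT/s effective, i.e. the kit is running at (about) its rated DDR3-1600 speed; the small shortfall is normal clock-divider rounding.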
Any feedback would be appreciated. Rest of the overview specs are as follows:
MS Windows 7 Home Premium 64-bit SP1
Intel Core i7 920 @ 2.67GHz 72 °C
Bloomfield 45nm Technology
16.0 GB Dual-Channel DDR3 @ 795MHz (11-11-11-28)
EVGA 132-BL-E758 (Socket 423) 58 °C
Hanns.G HG281D
1280MB GeForce GTX 570 (EVGA) 37 °C
313GB Seagate ST3320620AS ATA Device (SATA) 32 °C
977GB Seagate ST31000333AS ATA Device (SATA) 33 °C
977GB SAMSUNG SAMSUNG HD103UJ ATA Device (SATA) 26 °C
1954GB Western Digital WDC WD2001FASS-00W2B0 ATA Device (SATA) 34 °C
HL-DT-ST DVD-RAM GH22NS30 ATA Device
NVIDIA High Definition Audio
Edited by bobbydoogle, 04 June 2012 - 07:48 PM.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743963.32/warc/CC-MAIN-20181118031826-20181118053107-00008.warc.gz
|
CC-MAIN-2018-47
| 1,674
| 20
|
https://skylar.tech/installing-mqtt-under-docker/
|
code
|
I use MQTT for all of the sensors in my Home Automation setup. It runs on my home server and all of my devices/sensors connect to it to publish their data. I then have software like Home Assistant or Node-RED use this data for doing automations. I also log all this data for later graphing in Grafana.
If you have never heard of MQTT think of a chatroom (like Slack or Discord) but for devices to communicate. There are channels and any client can subscribe or publish to any channel. This way you can have multiple devices communicating with each other without having to have each device know about the other devices. All it needs to know is what channels to subscribe or publish to. This is the Pub/Sub model.
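A minimal in-process sketch of that pub/sub model (illustrative only; real MQTT adds hierarchical topics with wildcards, QoS levels, retained messages, and a networked broker):

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker: publishers and subscribers only know channel
    names, never each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, payload):
        for callback in self._subscribers[channel]:
            callback(payload)                   # deliver to every subscriber

broker = Broker()
readings = []
broker.subscribe("home/livingroom/temperature", readings.append)
broker.publish("home/livingroom/temperature", 21.5)
print(readings)   # [21.5]
```

Any number of consumers (Home Assistant, Node-RED, a Grafana logger) can subscribe to the same channel without the publishing sensor knowing about any of them.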
This post is going to cover how to install an MQTT (Mosquitto) broker Docker container using Community Applications within Unraid.
Installing in Unraid
Installing MQTT in Unraid is actually really easy as long as we have Community Applications installed. Go to
Apps and search for
MQTT and install the container from user spants.
Note: You will need to install the container and have it run at least once in order for it to create all of its config files.
You can run MQTT without authentication, but having an extra layer of security is always great and highly recommended. In order to generate users and passwords for our MQTT instance we need to go to our MQTT appdata path (default location is
/mnt/user/appdata/MQTT unless you changed it during install). What we need to do is add a new file called
passwords.txt inside this directory, to which we then add our users and the passwords we want for them, like so:
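A passwords.txt might look like the following sketch (Mosquitto's plaintext password file takes one username:password pair per line; the user names and passwords here are invented):

```
homeassistant:SuperSecretPassword1
nodered:AnotherSecretPassword2
```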
You can read the
/mnt/user/appdata/MQTT/passwords.README file for more information on how to format the
passwords.txt file if you are having issues (or comment below and I can help you out).
After that we need to edit the
mosquitto.conf file and change the line
allow_anonymous true to
allow_anonymous false in order to force users to authenticate to access the server.
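The relevant mosquitto.conf lines end up looking something like this (the password_file path is an assumption based on this container's config layout, so check your own mapping):

```
# mosquitto.conf -- require authentication
allow_anonymous false
password_file /mosquitto/config/passwords.mqtt
```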
You now need to restart the MQTT container from the Unraid web UI and the container will encrypt the credentials and move them into the
passwords.mqtt file (Do not add users directly to the
passwords.mqtt file, only delete and re-order users from here). The
password.txt file we created will then be deleted. Now you have your MQTT instance secured with authentication!
Testing it out
You can use the HIVEMQ Websocket Client page for testing out MQTT from your web browser. Just navigate to that link and put in your server credentials and you should be able to connect using port
9001 (this is the default unless you changed it). I use this tool all the time to test out my MQTT setup. Decent free tools are always nice :)
If you aren't able to connect then something is wrong with your setup. I recommend double checking all your container settings and checking your container logs. If you need more help figuring out a problem feel free to comment below and I will help you out the best I can.
Resource usage of this MQTT server is really low. My instance is currently only using 1.785 MiB of memory and very rarely goes over
0% CPU usage. You could get away with running this on some really low-end hardware without any issues (Raspberry Pi and other devices come to mind as cheap reliable brokers). I recommend checking out Eclipse Mosquitto project's website for more information.
MQTT is amazing, especially if you have a network with a lot of sensors that you need live data from (and it handles detecting whether a specific device is online/offline by using Last Will and Testament messages). I wish systems like Hue supported this protocol so that you didn't have to query their bridge over HTTP constantly to check for motion events and light states. I had to make a post about fixing slow Home Assistant Hue Motion sensors that wouldn't even be an issue if the bridge supported MQTT or something similar.
I use MQTT for tons of devices on my network. I have around 25 various physical sensors constantly publishing data to Home Assistant via MQTT (most are motion, temp, and door/window sensors). I also have a ton of virtual sensors created through Node-RED that also send data into Home Assistant via MQTT.
This setup has a really fast response time. I have some lights that turn on when doors open and I notice the lights turn on nearly instantly. Using MQTT has been incredibly reliable for me and I have been using it for nearly 2 years without any issues. It handles all of my automatic lights in my house, I don't really use light switches anymore.
I really like that I can have multiple systems listening for the value of a device instead of having to program that device to send to a list of specific devices over something like HTTP. I just send the data out to a channel and whatever is listening on that channel will get it. This makes wiring things together much easier and efficient.
Have any feedback? Run into any issues? Feel free to leave a comment below and I will get back to you.
Found this helpful?
If you found this post helpful please do consider donating via one of the methods below. It really helps me fund future projects that I can write about for you guys and gals. Every dollar is greatly appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100710.22/warc/CC-MAIN-20231208013411-20231208043411-00416.warc.gz
|
CC-MAIN-2023-50
| 5,236
| 35
|
https://forums.adobe.com/thread/223975
|
code
|
I have been using on a laptop OSX 10.3.9 for a couple of
years no problem. I just added to a desktop OSX 10.5.4 and ignored
the register since this is registered. Worked at first but has
stopped. Contribute no longer works on either computer. FTP is
working. Older computer says Contribute has disabled your
connection to this website. Home page is blank and shows that page
can be edited. Can't see a blasted thing.
The newer computer shows the spinning beach ball though some
of the page loads first. Under Contribute>my connections the
computer icon has a red line through it.
I understand I can use on two computers, but only one at a
time. Cannot figure out how to set this up. Tech support no longer
supports this software.
Contribute 4 is supported. I bought Contribute 3 when it was
a Macromedia product, which may be why it's not supported. It's a
way to force everyone to go to 4.
You are suggesting a patch on OSX 10.5 is the problem? I am
using Contribute 3 with 10.3 as well and that has stopped working
too. This aspect is not an OSX 10.5 patch. I hate it when there is
more than one thing gone wrong.
I am supposed to be able to have it installed on both, but can't
figure out how it's supposed to work. I am abiding by the
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219495.97/warc/CC-MAIN-20180822045838-20180822065838-00080.warc.gz
|
CC-MAIN-2018-34
| 1,233
| 22
|
https://strokemywookie.com/forum_threads/2825474
|
code
|
To use the codes you must login to the SWTOR website account page and select "Code Redemption" in the left hand menu (or use this link). Click the "Enter a Code" button on that page and copy and paste the code in. Some items are sent via the in-game mail system whilst others are applied to your account the next time you login.
If you know of any more then please post them below and I'll update the list. Let me know if any have expired.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887046.62/warc/CC-MAIN-20200705055259-20200705085259-00564.warc.gz
|
CC-MAIN-2020-29
| 439
| 2
|
http://www.talkstats.com/showthread.php/27473-IBM-Software-Support-Mastery-certification-exams?p=91168
|
code
|
My boss suggested that I take the IBM Software Support Mastery exams for certification, so that I can strengthen my position in my job. However, I do not know which exams I should take for this certification. If you know, please guide me.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382360/warc/CC-MAIN-20130516092622-00094-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 258
| 3
|
https://www.play4tomorrow.com/projects
|
code
|
The future of sports, education, well-being, and society.
Every project here has been proudly co-created with someone under 18.
Want to add your project to our list?
OPTION 1: START FROM SCRATCH
Submit your own idea, use our training programs to grow it, and get featured upon graduation.
OPTION 2: BUILD WITH US
Using our training programs, work on other people's projects until you come up with your own idea and graduate.
OPTION 3: SUBMIT A PROPOSAL
Feel free to contact us about a special project, we are always looking for collaborations.
Want to make your own or help out?
Learn the creative confidence to make your own or contribute. Work on other projects or build something from scratch, hone your skills in one of our training programs. For example, most Academy cadets lead multiple projects across multiple summers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100534.18/warc/CC-MAIN-20231204182901-20231204212901-00868.warc.gz
|
CC-MAIN-2023-50
| 944
| 13
|
http://csbsju.edu/chemistry/chemistry-faculty/anna-g-mckenna
|
code
|
Ph.D. - Clemson University, 1990
B.S. - Clemson University, 1996
Office: Ardolf Science Center # 250
Phone: (320) 363-5380
Email: [email protected]
CHEM 125: Introduction to Chemical Structure and Properties
CHEM 201: Purification and Separation Lab 1
I am interested in issues in chemical education such as gender differences in science, developing research experiences for introductory students, and developing effective pedagogy for teaching chemistry to introductory students. I am currently developing a web-based tutorial program for concepts in general chemistry.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809229.69/warc/CC-MAIN-20171125013040-20171125033040-00711.warc.gz
|
CC-MAIN-2017-47
| 570
| 8
|
http://www.hriradio.org/2015/07/thebuzz13.html
|
code
|
Hobart Radio International is a shortwave community service relayed across the globe via shortwave and FM in New Zealand.
We are the voice for Tasmania, and will always be developing and different. Crossing the borders since 2004, telling the unknown and investigating on what's important.
The Buzz Music Talk Show featuring DX News & Pirate Logs No.13
On today’s show we play a Little Britain sketch, learn cooking tips from the master Swedish Chef, relive the 80s, comedy from Monty Python, DX News: the latest HAARP News and shortwave pirate logs, and we finish with Enya.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00607.warc.gz
|
CC-MAIN-2023-50
| 632
| 4
|
https://www.brimbox.com/module/set-layout-names/
|
code
|
Set Layout Names>>
The Set Layout Names module is for managing the Layouts of the database. Layouts are like regular database tables and are sometimes referred to in Brimbox as pseudo-tables. Besides defining a layout's existence, the administrator can remove layouts, change layout names, define parent-child relationships, set the layout order, and define the security level for the layout. The Set Layout Names module is only available to administrators and can be selected from the Set Layout Names button on the left of the Admin tab.
- The first button is for submitting the form, setting the current layout to the values entered in the form. The next button will refresh the form to the values currently stored in the database. The final button will run the Postgres VACUUM DATABASE command used to clean up the database.
- This area is for entering the singular and plural layout names for layouts. To remove a layout definition blank out the singular and plural names and submit the form with the Submit Layouts button. This will remove the layout but not the underlying data. The underlying data will need to be cleaned up using the functionality on the Backup and Restore module.
- This is where parent-child relationships are set up, done by pointing a child layout at its parent. If a layout is top level or a single entity this entry should be blank. Layouts here are referred to by letter and number, A1, B2, C3….where row type 2 is equivalent to layout B.
- This is for setting the order in which layouts are displayed, which must be set in strictly ascending order starting from 1.
- This is for setting the security of a layout. A secure layout has a secure value of 1 unless custom security is set using a Global Array.
- The Autoload functionality is not yet implemented and is reserved.
- The related checkbox enables a layout to be used as a related table. A related table also needs to be set in the related columns of another layout (columns c41-c46).
- Join relationships are implemented here. Note that in the join table the lesser layout number has its value in the first column and the greater layout must be in the second column.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000231.40/warc/CC-MAIN-20190626073946-20190626095946-00024.warc.gz
|
CC-MAIN-2019-26
| 2,135
| 10
|
http://stefanmikarlsson.blogspot.com/2009/03/treasuries-no-longer-considered-risk.html
|
code
|
Treasuries No Longer Considered Risk Free
Personally, I don't think there is any risk of default for a government that issues debts in its own currency and has a fiat currency with a floating exchange rate, as it can always print whatever money it needs to pay. That might lead to a de facto default, as the real value that investors get back is lower, but it won't be the kind of formal default that credit default swaps protect you from. So the people buying these swaps are wasting their money for nothing. But that's their problem, and the point remains that Treasury yields thus contain a risk premium, and cannot be used as the risk-free reference rate.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125654.80/warc/CC-MAIN-20170423031205-00159-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 660
| 2
|
https://issues.apache.org/jira/browse/PROTON-1135
|
code
|
Dispatch router (which uses Proton-c) currently sends pipelined SASL and OPEN frames by default when connecting out to other peers using the ANONYMOUS mech, as shown in the following trace -
The AMQP 1.0 spec does not make it clear that this is supported (e.g. see diagram below) but in any case various components have shown difficulty with it (such as
PROTON-1135 just raised, and QPID-6639 which has yet to be included in a release but permitted the above protocol trace log).
Proton should by default disable sending pipelined OPEN frames for ANONYMOUS logins, to aid compatibility with other components that don't expect/handle such behaviour.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141211510.56/warc/CC-MAIN-20201130065516-20201130095516-00270.warc.gz
|
CC-MAIN-2020-50
| 648
| 4
|
http://kimcolemanphd.com/
|
code
|
Welcome to my website! I am an interdisciplinary scholar focused on the nexus of natural resources and civic engagement. I received a B.S. and a M.S. from the University of Vermont in environmental studies and natural resources, respectively. I also hold a Ph.D. in forest resources and environmental conservation from Virginia Tech. I am currently a Postdoctoral Associate in Dr. Rachelle Gould’s research group at the University of Vermont, working on questions related to collaborative forest planning and management, sustainability education, equity, and cultural ecosystem services. Please explore this site to learn about my teaching, research, and service.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986710773.68/warc/CC-MAIN-20191020132840-20191020160340-00311.warc.gz
|
CC-MAIN-2019-43
| 665
| 1
|
https://community.smartsheet.com/discussion/117202/tracking-time-by-project
|
code
|
Tracking time by project
I am familiar with simpler Google Forms, where we read the results in a spreadsheet. I'm new to Smartsheet, and overwhelmed by the number and complexity of the offerings.
Starting small, I'm tracking one employee (so far) and want to know how she's allotted her time daily, over the week, and at bigger intervals.
I'd prefer the employee to fill out a form, and the results in a spreadsheet where I can enable rollup by project and time.
Pointers to solve this (simple) issue would be appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476532.70/warc/CC-MAIN-20240304200958-20240304230958-00337.warc.gz
|
CC-MAIN-2024-10
| 520
| 5
|
https://www.myjobmag.com/readjob/6075/jobs/oracle-nigeria-is-hiring
|
code
|
Oracle - Provides the world's most complete, open, and integrated business software and hardware systems and work with all 100 of the Fortune 100.
Go ahead, amaze us. When you provide the world's most complete, open, and integrated business software and hardware systems and work with all 100 of the Fortune 100, you have pretty high standards. That's why at Oracle, we seek only the top sales talent to join our team. In return, we provide the opportunity for you to showcase your talent as you enjoy the rewards of selling technology that is the envy of the industry. Join us and be part of the best sales force in the business.
Change is good. This change is even better. If you feel like you've hit the ceiling of your current job, join the company whose potential is virtually limitless. Oracle is the global leader in advanced business software, hardware and middleware solutions. In fact, we help drive the success of all 100 companies in the Fortune 100. If you're a highly ambitious sales professional looking for more from your career, we'd like to help drive your success too.
We are recruiting to fill the position below:
Job IR: 2458446
Sells a subset of product or services directly or via partners to a large number of named accounts/non-named accounts/geographical territory (mainly Tier 3 accounts).
Primary job duty is to sell business applications software/solutions and related services to prospective and existing customers. Manage sales through forecasting, account resource allocation, account strategy, and planning. Develop solution proposals encompassing all aspects of the application. Participate in the development, presentation and sales of a value proposition. Negotiate pricing and contractual agreement to close the sale. Identify and develop strategic alignment with key third party influencers.
Interested and qualified candidates should Click here to apply online.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865023.41/warc/CC-MAIN-20180523004548-20180523024548-00102.warc.gz
|
CC-MAIN-2018-22
| 1,900
| 8
|
https://github.com/amazeeio/lagoon/pull/1432
|
code
|
use newer check if drupal installed #1432
this might fix the postgres issues we see
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251779833.86/warc/CC-MAIN-20200128153713-20200128183713-00212.warc.gz
|
CC-MAIN-2020-05
| 336
| 5
|
https://gamesdb.launchbox-app.com/games/images/132455-spike-hoppin
|
code
|
Spike Hoppin' is a game where you make Spike hop on blocks, and change all their "colors". Spud returns and will try to stop Spike, as well as other enemies and a few friends too.
Spike Hoppin' includes digitized speech (Spike talks again!), as...
Images should only be deleted if they are duplicates, don't match the game they are assigned to, or are of very poor quality.
The image type is required and is crucial to classifying the image correctly. Please read through all items in the list to ensure you select the correct one.
Region is not required for all image types, but helps to classify box art images especially. If an image has an ESRB rating, then you can be sure it is an image from North America. Images with PEGI ratings are always from Europe.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510454.60/warc/CC-MAIN-20230928194838-20230928224838-00126.warc.gz
|
CC-MAIN-2023-40
| 761
| 5
|
https://emacs.stackexchange.com/questions/48418/any-package-for-accessing-google-spelling-suggestion-api
|
code
|
Google spelling suggestion/correction is quite useful to me, e.g., see the screenshot in this question.
Is there any Emacs package / function to call the API for us?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524254.28/warc/CC-MAIN-20190715215144-20190716001144-00411.warc.gz
|
CC-MAIN-2019-30
| 401
| 5
|
https://vulners.com/thn/THN:5FE224E03F0BFE896C88206E629D7A10
|
code
|
Dear hackers, warm up your keyboards! Facebook has opened registration for its third Hacker Cup (2013), an annual worldwide programming competition where hackers compete against each other for fame, fortune, glory and a shot at the title of world champion, with a $5,000 top prize.
The qualification round begins on January 25th. So participate and put your programming skills to the test.
The dates have been set for Facebook Hacker Cup 2013
Registrations Page - <https://www.facebook.com/hackercup/register>
This is your chance to compete against the world’s best programmers for awesome prizes and the title of World Champion.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735836.89/warc/CC-MAIN-20200803224907-20200804014907-00414.warc.gz
|
CC-MAIN-2020-34
| 621
| 5
|
https://www.oscarli.one/
|
code
|
I’m a rising third year PhD student in the Machine Learning Department at Carnegie Mellon University advised by Professor Virginia Smith. I’m broadly interested in two aspects of machine learning:
how to endow machine learning models with more human-like intelligence;
how to make machine learning models more practically useful and reliable.
Towards these two goals, I’ve worked on areas including meta-learning, out-of-distribution generalization/evaluation, federated learning, privacy protection, and model interpretability.
Before starting my graduate studies, I graduated Summa Cum Laude with double majors in Mathematics and Computer Science from Duke University where I worked with Professor Cynthia Rudin.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585025.23/warc/CC-MAIN-20211016200444-20211016230444-00257.warc.gz
|
CC-MAIN-2021-43
| 720
| 5
|
https://www.whatuptime.com/community/technical-support/dedibox-sc-2016-windows-2016-no-access-after-deploy/
|
code
|
dedibox sc 2016 - windows 2016 - no access after deploy
I deployed this image: Microsoft_Windows_Server_2016_Datacenter_Evaluation_64-bit_US_English.gz
I had the impression that everything was good
[email protected]:~# wget -O- http://mirror.whatuptime.com/XXXXX/releases/Microsoft_Windows_Server_2016_Datacenter_Evaluation_64-bit_US_English.gz | gunzip | dd of=/dev/sda
Resolving mirror.whatuptime.com (mirror.whatuptime.com)... 188.8.131.52
Connecting to mirror.whatuptime.com (mirror.whatuptime.com)|184.108.40.206|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4596264608 (4,3G) [application/octet-stream]
Saving to: ‘STDOUT’
100%[====================================>] 4 596 264 608 26,8KB/s in 10m 45s
2017-09-08 12:17:09 (6,79 MB/s) - written to stdout [4596264608/4596264608]
31457280+0 records in
31457280+0 records out
16106127360 bytes (16 GB) copied, 655,813 s, 24,6 MB/s
But after I switched to normal mode, I can't access the server via RDP. I tried to restart... same result
I can use rescue mode and QEMU, and I can start Windows (in QEMU). My impression is that the server, in normal mode, doesn't succeed in obtaining a DHCP IP...
Is there somebody who can help me 😉 ?
Template Name: Microsoft_Windows_Server_2016_Datacenter_Evaluation_64-bit_US_English.gz
Vendor Service Package: dedibox SC 2016
Physical or Virtual: Physical Server
Processor: Intel(R) Atom(TM) CPU C2338 @ 1.74GHz
Ethernet controller : Intel Corporation I210 Gigabit
Location: Online.net, France, DC5
Error Information: After deployment : no access via RDP (try from different site, others servers still accessible via RDP)
With rescue mode (Ubuntu 14)
Qemu : wget -qO- /tmp https://ia601503.us.archive.org/12/items/vkvm.tar/vkvm.tar.gz | tar xvz -C /tmp
Launch server in emulated mode: /tmp/qemu-system-x86_64 -net nic -net user,hostfwd=tcp::3389-:3389 -m 2048M -localtime -enable-kvm -cpu host,+nx -M pc -smp 2 -vga std -usbdevice tablet -k en-us -hda /dev/sda -boot c -vnc :1
With a VNC viewer and the public IP of the server, you can access your server and view the event log: "Your computer was not assigned an address from the network (by the DHCP server) for the network card with network address 0xmacadresse. The following error occurred: 0x79. Your computer will continue to try to obtain an address on its own from the network address (DHCP) server."
I had opened a ticket at Online.net. They told me that there is no problem on their infrastructure...
Where is the problem? Why is this deployed image not accessible?
Unfortunately as it stands I do not have that answer available, however we are still actively troubleshooting with the intent of coming to a resolution.
As soon as we have any information to share, we will release it.
Apologies for the unforeseen issues.
Thanks to a generous member here I have had direct access to a SC SATA 2016 for the last few days allowing me direct access to an Online.net machine where the templates we provide fail to work as intended. I have spent countless hours troubleshooting at this stage and honestly I am truly at a loss as to the reasons for the problems being faced.
My research/troubleshooting thus far...
- The issue appears to be limited to the "SC SATA or SC SSD" in the personal range of dedicated servers provided by Online.net. I have confirmed the templates are working correctly on various other "XC SATA and XC SSD" servers.
- The templates successfully install Windows to the dedicated servers and Windows starts without error (It boots up correctly)
- The drivers for the Intel I210 NICs are properly included in the templates and the NIC's are being properly installed according to both the Windows event log and the device log (I booted the Windows installation via QEMU using rescue mode to review all logs).
- Upon Windows booting & NIC drivers being installed typically an IPv4 IP is assigned via DHCP from Online.net's network, however in the current case an IP is never assigned (This is quite unexpected, DHCP is working as expected on all other non-SC systems we tested).
- I have manually assigned the correct IP address to the NIC using Windows NETSH command via batch file and Task Scheduler. The IP is correctly assigned to the NIC, the server responds to ping (I disabled the Windows firewall completely), however I am unable to access to the server via RDP (Remote Desktop) as it hangs on "Securing Connection".
- In light of RDP failing to connect I installed both TeamViewer & VNC, both fail to connect even though the server responds to ping.
- When booting Windows via QEMU using Online.net's rescue mode all three connection methods work without issue (RDP, TeamViewer & VNC).
I am uncertain where to head from here, but I am continuing to troubleshoot as ideas come to me.
I have set the IP manually (via a scheduled task)
The server can ping an outside IP (220.127.116.11 for example)
When I try to capture traffic with Wireshark, I can observe the first exchanges between the RDP client and the server
Attached file: RDP session with the server in normal mode.
Thank you for this answer. No clue about the SC 2016 problem?
Unfortunately I haven't a clue as to the cause of the underlying problem. A generous member here on the forums allowed me direct access to his non-working SC 2016 for over a week during which I spent countless hours attempting to resolve the issue, but unfortunately I made no progress at all.
The issue so far has only been experienced on SC 2016 servers with the Intel I210 NIC; all SC/XC 2016 servers with the Intel I350 NIC are working as expected.
The issue is either with the Intel I210 NIC (i.e. drivers) or a limitation in place on the Online.net network. The drivers report everything is operational and working as intended, however no matter the changes made, network access is never achieved. The IP for the server isn't being provided by DHCP either (odd for Online.net in my experience), and even when the IP is manually assigned to the NIC the issue is still present.
When I manually added the IPv4 IP to the NIC the server would respond to ping and RDP, however RDP would hang on "Securing Connection" and never completely connect. I had similar issues with both TeamViewer and VNC.
Something is whacked on the networking front.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590901.10/warc/CC-MAIN-20180719125339-20180719145339-00494.warc.gz
|
CC-MAIN-2018-30
| 6,256
| 53
|
http://homebrew.stackexchange.com/questions/tagged/off-flavor+wort
|
code
|
What does scorched wort taste like?
I just did my first batch on a Camp Chef propane burner http://cascadeclimbers.com/gear/hiking/product/Camp-Chef-Yukon-Stove.html Problem is that during the boil it never crossed my mind that ...
Mar 10 '11 at 1:18
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826343.66/warc/CC-MAIN-20140820021346-00271-ip-10-180-136-8.ec2.internal.warc.gz
|
CC-MAIN-2014-35
| 2,199
| 52
|
https://support.adeptia.com/hc/en-us/articles/207878933-Adeptia-Suite-applets-through-Proxy-Server-v6-0-
|
code
|
When accessing Adeptia located within a corporate network, sometimes these networks operate behind a proxy server that requires authentication. This post explains how to configure the proxy settings in Adeptia so that you can open the Adeptia Suite applets, such as Process Designer and Data Mapper, through a proxy server.
1) Go to the Administer Tab > Setup > Application Settings
2) Select Update System Properties
3) Under Applet Configuration, Select the applet you'd like to open through a proxy server. In this post, we'll choose the Data Mapper
4) Scroll to the abpm.dataMapper.proxy.host and abpm.dataMapper.proxy.port properties and enter in the necessary values
You can set these properties for each applet.
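For reference, once saved, the two properties from step 4 look something like the fragment below. The host name and port here are placeholders, not values from the product; substitute your own proxy server details:

```
abpm.dataMapper.proxy.host=proxy.example.com
abpm.dataMapper.proxy.port=8080
```

The properties for the other applets follow the same host/port pattern.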
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.96/warc/CC-MAIN-20230924223409-20230925013409-00145.warc.gz
|
CC-MAIN-2023-40
| 718
| 6
|
https://www.takecareofmoney.com/hardware-definition/
|
code
|
Hardware comprises all the physical parts that make up a computer: the ones that can actually be touched.
Hardware came before software; it was Konrad Zuse who managed to build one of the first computers that worked correctly (the Z3, completed in 1941). From there, people began to speak of hardware.
The computer has become an essential tool, especially following the boom of the Internet and new technologies. Software issues a series of instructions, which are executed by the hardware.
What are the components of hardware?
These are the most prominent:
- Motherboard. A fundamental part of the computer; without it, the machine could not work. The other components connect to it so that everything can function properly.
- Processor. Its main mission is to run the operating system, other applications, and various components that are included in a computer.
- HDD. Its function is to store and save all the information that exists. There are SSD, SATA, and SAS drives: SSDs are the fastest; SATA drives are slower but can store a large amount of data; SAS drives are durable but more expensive.
- Power supply. Its main task is to convert alternating current into direct current so that the computer can function optimally, since it needs a source with enough power.
- Graphics card. It is located on the motherboard, and its main mission is to process image data and display it, such as the images that can be seen on the screen.
- RAM. A fundamental piece, since it holds the instructions and data the processor is working with. Keep in mind that the more memory the computer has available, the faster processes and applications can be carried out.
- Case. Also called the chassis, it is the support in which all the components are housed. That way they are protected and arranged safely so that they work properly.
- Heatsinks and coolers. Temperature is another issue to consider in computers, and these components are responsible for managing it. The most common types are air cooling (small fans), liquid cooling (a closed circuit of water that cools the system), and passive cooling (finned heatsinks with no moving parts).
- Additional devices: For example, the mouse, the keyboard, speakers, etc.
All these elements are what make up the hardware, a key piece in any computer so that it works correctly. They are perfectly tangible electrical or mechanical elements, and they are also capable of being replaced or changed if their operation is not adequate.
Actions such as sending documents, transferring files, viewing images on the screen, surfing the internet, or carrying out other tasks are possible thanks to the hardware function.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475806.52/warc/CC-MAIN-20240302084508-20240302114508-00569.warc.gz
|
CC-MAIN-2024-10
| 2,815
| 16
|
https://lists.runrev.com/pipermail/use-livecode/2013-December/196327.html
|
code
|
richmondmathewson at gmail.com
Sun Dec 15 11:39:47 EST 2013
I'm having a lot of fun with this one.
Imagine a virtual keyboard if you will; perhaps a bit like those on iPads.
Now imagine that as I mouseEnter the button on the virtual keypad that
would correspond to the 'P' key
on an American English physical keyboard, but the 'П' key on a Bulgarian
keyboards, and all sorts of other
symbols on other physical keyboards.
A lowercase 'p' in both ASCII and Unicode is char 112, so it is dead
easy to have this sort of script in one's
button on one's virtual keyboard:
set the useUnicode to true
set the unicodeText of fld "Wozzit" to numToChar(112)
so that when the end-user does a mouseEnter over the button its keyboard
equivalent appears in a
display field called "Wozzit".
All fine and dandy, but come the chap who has an Armenian keyboard he
really wants to know that
the key is the 'պ' as he doesn't have a 'p' key.
So I started imagining I could do this sort of thing:
get the keyDown for rawKeyDown 112
but I cannot.
I could have a preference script at the start of my program where the
end-user could set his/her keyboard layout; but that presupposes I know
all the potential keyboard layouts of my end-users, and am prepared to
spend the next 3 months setting up all the variants . . .
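Sketched outside LiveCode (in Python, with made-up layout names and glyph tables, not a real keyboard database), the per-layout lookup being wished for here amounts to indexing a mapping by layout and codepoint:

```python
# Illustrative only: layout identifiers and glyph tables are invented
# examples, not an actual keyboard-layout registry.
LAYOUTS = {
    "en_US": {112: "p"},   # American English: this key shows 'p'
    "bg_BG": {112: "п"},   # Bulgarian: the same physical key shows 'п'
    "hy_AM": {112: "պ"},   # Armenian: the same physical key shows 'պ'
}

def glyph_for(layout: str, code: int) -> str:
    # Fall back to the plain Unicode character for unknown layouts.
    return LAYOUTS.get(layout, {}).get(code, chr(code))

print(glyph_for("en_US", 112))  # -> p
print(glyph_for("hy_AM", 112))  # -> պ
```

The hard part the author identifies remains: someone has to populate a table like this for every layout the end-users might have.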
More information about the Use-livecode mailing list
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488544264.91/warc/CC-MAIN-20210623225535-20210624015535-00113.warc.gz
|
CC-MAIN-2021-25
| 1,336
| 28
|
https://fiverrpromotion.net/i-will-write-software-engineering-resume-software-developer-it-and-tech-resume-347/
|
code
|
Are you looking to land your dream job in the software industry?
Look no further! I specialize in creating professional resumes for software engineers, developers, IT professionals, and tech experts.
Why choose me?
- I have years of experience writing resumes for professionals in the software and tech industry.
- I understand the key skills and qualifications that recruiters and hiring managers look for in this field.
- I will tailor your resume to highlight your strengths and accomplishments in the best possible way.
What you will get:
- A customized resume that showcases your skills and experience.
- A professionally crafted cover letter to complement your resume.
- Keyword optimization to ensure your resume gets noticed by applicant tracking systems.
Don't let a poorly written resume hold you back from your dream job. Let me help you stand out and impress potential employers in the software industry!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00423.warc.gz
|
CC-MAIN-2024-18
| 916
| 11
|
https://stemsalabim.readthedocs.io/en/5.0.0/whats_new.html
|
code
|
February 28th, 2019
The parameters application.verbose and simulation.skip_simulation are deprecated now. The groups adf/adf_intensities, cbed/cbed_intensities, and adf/center_of_mass now have a dimension for energy loss. Its size is usually 1 unless the plasmon scattering feature is used.
- Speed improvements by increasing the grid sizes to match efficient FFT sizes. Note that this may result in a higher simulation grid density than specified in the grating.density parameter!
- Alternative parallelization scheme, see Hybrid Parallelization model. When appropriate, different MPI procs now calculate different frozen phonon configurations / defoci in parallel. This reduces the required amount of communication between the processors.
- Automatic calculation of center of mass of the CBEDs for all ADF points. The COMs are calculated when adf.enabled = true and stored in the NC file next to adf/adf_intensities in adf/center_of_mass. Unit is mrad.
- New executables ssb-mkin and ssb-run. The former prepares an input NC file from which the latter can run the simulation. This has multiple advantages. See Structure of a simulation for more information.
- Single plasmon scattering.
- Removed application.verbose parameter.
- Removed simulation.skip_simulation.
- Ability to disable thermal displacements via frozen_phonon.enable = false parameter.
- Fixed a serious bug with the integrated defocus averaging.
- Input XYZ files can now contain more than one space or TAB character for column separation.
- Removed Doxygen documentation and doc string comments.
- Default FFTW planning is now FFTW_MEASURE. This improves startup times of the simulation slightly.
- Changed the chunking of the adf/adf_intensities and cbed/cbed_intensities variables for faster write speed.
- Added AMBER/slice_coordinates variable to the output file, that contains the z coordinate of the upper boundary of each slice in nm.
- Removed HTTP reporting and CURL dependency.
- Significant code refactoring and some minor bugs fixed.
- Improved documentation.
STEMsalabim 4.0.1, 4.0.2¶
March 23rd, 2018 / March 21st, 2018
- Bug fixes
- Changed chunking of the ADF variable
March 9th, 2018
I’m releasing this version as 4.0.0, but neither the input nor output files changed. The parameter precision has become deprecated and there is a parameter tmp-dir. Please see the documentation.
- Removed option for double precision. When requested, this may be re-introduced, but it slowed down compilation times and made the code significantly more complicated. The multislice algorithm with all its approximations, including the scattering factor parametrization, is not precise enough to make the difference between single and double precision significant.
- Improved the Wave class, so that some important parts can now be vectorized by the compiler.
- Introduced some more caches, so that performance could greatly be improved. STEMsalabim should now be about twice as fast as before.
- Results of the MPI processors are now written to temporary files and merged after each configuration is finished. This removes many MPI calls which tended to slow down the simulation. See the –tmp-dir parameter.
- Moved the Element, Atom, and Scattering classes to their own (isolated) library libatomic. This is easier to maintain.
- Simplified MPI communication by getting rid of serialization of C++ objects into char arrays. This is too error-prone anyway.
- Added compatibility with the Intel parallel studio (Compilers, MKL for FFTs, Intel MPI). Tested with Intel 17 only.
- Some minor fixes and improvements.
STEMsalabim 3.1.0, 3.1.1, 3.1.2, 3.1.3, 3.1.4¶
February 23rd, 2018
- Added GPL-3 License
- Moved all the code to Gitlab
- Moved documentation to readthedocs.org
- Added Gitlab CI
STEMsalabim 3.0.1 and 3.0.2¶
February 22nd, 2018
- Fixed a few bugs
- Improved the CMake files for better build process
January 3rd, 2018
- Reworked input/output file format.
- Reworked CBED storing. Blank areas due to bandwidth limiting are now removed.
- Changes to the configuration, mainly to defocus series.
- Compression can be switched on and off via config file now.
- Prepared the project for adding a Python API in the future.
- Added tapering to smoothen the atomic potential at the edges as explained in I. Lobato, et al, Ultramicroscopy 168, 17 (2016).
- Added analysis scripts for Python and MATLAB to the Si 001 example.
August 1st, 2017
- Changed Documentation generator to Sphinx
- Introduced a lot of memory management to prevent memory fragmentation bugs
- split STEMsalabim into a core library and binaries to ease creation of tools
- Added diagnostics output with –print-diagnostics
- Code cleanup and commenting
April 20th, 2017
- Added possibility to also save CBEDs, i.e., the kx/ky resolved intensities in reciprocal space.
- Improved documentation.
- Switched to NetCDF C API. Dependency on NetCDF C++ is dropped.
- Switched to distributed (parallel) writing of the NC files, which is required for the CBED feature. This requires NetCDF C and HDF5 to be compiled with MPI support.
March 27th, 2017
- Lots of code refactoring and cleanup
- Added Doxygen doc strings
- Added Markdown documentation and a make doc target to build this website.
- Refined the output file structure
- Added HTTP reporting feature
- Added fixed_slicing option to fix each atom's slice throughout the simulation
- Got rid of the boost libraries to ease compilation and installation
November 18th, 2016
- Initial release.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00760.warc.gz
|
CC-MAIN-2022-33
| 5,465
| 73
|
https://paypdm.medium.com/holding-campaign-community-engagement-c1d8066f6504?source=post_internal_links---------6----------------------------
|
code
|
Holding Campaign — Community Engagement
Holding Campaign has finally come to an end.
The holding campaign ran from Nov 30th to Dec 27th.
With reference to the guidelines laid out for the Holding Campaign, We took a step further in keeping under observation and review of all individuals participating on the holding campaign in line with the allocation of rewards to the winning holders.
With reference to the followings;
64% of Prospective Participants had a panic sell of at least 1,000 PYD
23% of Prospective Participants had no purchase of at least 1,000 PYD during the holding campaign event
8% of Prospective Participants were disqualified for using multiple wallets and other related violations
5% of Prospective Participants made a sell of at least 1,000 PYD before the end date
Other active campaigns
Upcoming social event
Airdrop Z Pre-Event
Telegram Community (t.me/paypdmofficial)
Telegram Announcement (t.me/paypdm)
Powered by CVETKO AG
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00016.warc.gz
|
CC-MAIN-2022-40
| 942
| 16
|
https://lists.jboss.org/archives/list/rules-dev@lists.jboss.org/message/2XIQNOEILND2H7UDTMGV5OK2F7VEJHZA/
|
code
|
Two general comments on the format of the RuleFlow (.rf) file from when I
was doing the NumberGuess documentation.
When I built workflow apps using jBPM I was able to edit the workflow XML
files by hand. Some people prefer this and it makes it easier should
somebody want to build an alternative editor for these files (e.g. web-based
instead of the Eclipse IDE). For jBPM I *had* to edit these manually, as
the IDE kept crashing.
The comment / complaint is that the Drools Ruleflow files are much harder to
edit. Yes, it's XML, but the nodes do not seem to be in any particular order.
With jBPM , the nodes tend to be arranged in a flow from Start to End.
Talking of Web Based editors for flow, have you seen the Ajax / DHTML Editor
that Yahoo has for Pipes:
This shows that it is possible to have a Flow editor implemented in HTML.
Just my two thoughts.
On 10/1/07, Mark Proctor <mproctor(a)codehaus.org> wrote:
rules-dev mailing list
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100779.51/warc/CC-MAIN-20231208212357-20231209002357-00002.warc.gz
|
CC-MAIN-2023-50
| 935
| 16
|
https://satcom-services.com/upconverter-0-95-1-45ghz-12-25-12-75ghz/
|
code
|
2009-122 Upconverter - Converts a 950 - 1450 MHz signal to 12.25 - 12.75 GHz with a low side local oscillator (LO) (noninverted spectrum) for loop-back applications. Featuring low phase noise, this unit is used to upconvert 950 - 1450 MHz signals to 12.25 - 12.75 GHz for test purposes. The 950 - 1450 MHz input is mixed with a synthesized local oscillator (LO) signal to 12.25 - 12.75 GHz. The mixer output is applied to the output attenuator, providing a nominal gain of -30 dB. Connectors are 75 ohm F female for the 950 - 1450 MHz input and 50 ohm SMA female for the RF output. Front panel LEDs light when DC power is applied (green) and when a PLL alarm occurs (red). DC power is provided by an external 115 VAC, 60 Hz wall mount power supply. A 90-240 VAC, 47-63 Hz wall mount power supply (option -P4) is available. The 2009 can be mounted on a 1 3/4” X 19” rack mount panel (option -R).
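As a quick sanity check of the stated frequency plan (my own arithmetic, not a figure from the datasheet): with a low-side LO and a noninverted spectrum, RF out = IF in + LO, so the implied LO is 11.3 GHz, and it maps both band edges consistently:

```python
# Frequency-plan check for low-side-LO, noninverted upconversion:
# RF_out = IF_in + LO, hence LO = RF_edge - IF_edge at each band edge.
IF_LOW, IF_HIGH = 950e6, 1450e6      # L-band input range, Hz
RF_LOW, RF_HIGH = 12.25e9, 12.75e9   # Ku-band output range, Hz

lo = RF_LOW - IF_LOW                 # implied LO frequency
assert lo == RF_HIGH - IF_HIGH       # the same LO must map the upper edge
print(lo / 1e9)                      # -> 11.3 (GHz)
```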
(View Our Datasheet)
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00609.warc.gz
|
CC-MAIN-2022-40
| 920
| 2
|
http://www.newgrounds.com/portal/view/601778?footer_feature=movies
|
code
|
God punishes Egypt with 10 wacky plagues!4.42 / 5.00 10,560 Views
You wanna see it? You wanna see my pecker?4.42 / 5.00 13,937 Views
Why everyone hated the kid that lived across the street4.27 / 5.00 5,066 Views
This board is currently empty.
~~~WASD to move~~~
~~~Arrow Keys to shoot (when you evolve enough)~~~
My entry for Ludum Dare 24, a 48 hour game making competition. The theme was evolution. Play as a slime that evolves the more it eats, and use your evolution to defeat your enemies. Gain friends, defeat bats and alligators. Thanks for the feature. Link to Ludum Dare page for the game: http://www.ludumdare.com/
Made in FlashPunk.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894151.32/warc/CC-MAIN-20140722025814-00183-ip-10-33-131-23.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 642
| 8
|
http://jessownlife.blogspot.com/2016/07/week-7-in-y2s2.html
|
code
|
Recently my life has not been easy. I have been wondering whether I am stressing myself out or not... Hmm, I guess that is a question for me... But anyhow I am still alive, with a ton of assignments, midterms and presentations by my side right now.. ~.~ We joined another club this semester as committee members. I will be handling an event. I hope I can make it a real thing to gain experience; it's a real platform for me to perform. With this experience I hope I can get a better job by knowing what to do with my future. I thought I would have a new post next semester, but I am too excited about the coming events. I have plenty of friends, but when I tell them it seems like I am showing off while nothing seems solid. So what I can do is just rely on my personal space.
Hope that everything will be okay this semester NO MORE MEDICAL CASE!!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945669.54/warc/CC-MAIN-20180423011954-20180423031954-00460.warc.gz
|
CC-MAIN-2018-17
| 843
| 2
|
https://licensemysoftware.net/2021/12/02/enfocus-switch-2021-fall-release/
|
code
|
The Switch platform gets an update – December 2nd, 2021
Switch 2021 Fall is a platform update release. With this update comes improved performance, Submit Point UX updates, and scripting additions. We’ve also added support for the latest operating systems.
This update concentrates on improving the platform.
Native support for Apple Silicon M1 chip
Support for macOS 12 Monterey
Performance improvements on Mac and Windows for Node.js script-based elements
Improvements with file moving operations
Job submission UX improvements
Refresh job element
Job name validation at the Submit point
Scripting improvements and additions
Support for Node.js version 16
Execute command stdout and stderr stored as private data.
Node.js webhook support
Node.js ImageDocument class
flowStartTriggered and flowStopTriggered script entry points
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473360.9/warc/CC-MAIN-20240221002544-20240221032544-00628.warc.gz
|
CC-MAIN-2024-10
| 832
| 16
|
https://github.com/kubernetes/kubernetes/issues/68677
|
code
|
CSR API doesn't allow "NEW CERTIFICATE REQUEST" PEM blocks #68677
Is this a BUG REPORT or FEATURE REQUEST?:
when submitting the below CSR to k8s for signing (generated by the Java keytool) :
k8s rejects the request with the error message:
What you expected to happen:
This is a valid PEM-encoded CSR, so the request should be accepted. @liggitt noticed that the keytool-generated CSR has the word "NEW" in its PEM header.
How to reproduce it (as minimally and precisely as possible):
Submit the following CSR:
Anything else we need to know?:
@mlbiam: There are no sig labels on this issue. Please add a sig label by either:
Note: Method 1 will trigger an email to the group. See the group list.
That header isn't actually required by the standard for this (c.f.
if "NEW CERTIFICATE REQUEST" is a known alternate representation (openssl indicates it is the old PEM header for CSRs), I don't mind allowing it.
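Pending a decision on allowing the legacy header server-side, a client-side workaround is to rewrite the old PEM label to the modern form before submitting. This is only a sketch, not part of Kubernetes, and the base64 body below is dummy placeholder content:

```python
# Sketch: rewrite keytool's legacy PEM label ("NEW CERTIFICATE REQUEST",
# the old OpenSSL-era form) to the modern "CERTIFICATE REQUEST" label
# before handing the CSR to the Kubernetes CSR API.
def normalize_csr_pem(pem: str) -> str:
    return (pem
            .replace("-----BEGIN NEW CERTIFICATE REQUEST-----",
                     "-----BEGIN CERTIFICATE REQUEST-----")
            .replace("-----END NEW CERTIFICATE REQUEST-----",
                     "-----END CERTIFICATE REQUEST-----"))

legacy = ("-----BEGIN NEW CERTIFICATE REQUEST-----\n"
          "MIIBdummybase64payload\n"          # placeholder, not a real CSR
          "-----END NEW CERTIFICATE REQUEST-----\n")
print(normalize_csr_pem(legacy).splitlines()[0])
# -> -----BEGIN CERTIFICATE REQUEST-----
```

The base64 payload itself is unchanged; only the surrounding labels differ between the two forms.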
https://evsoup.com/solar-power-capital-of-the-world/
From Fully Charged Show.
With a staggering 1 in 3 homes kitted out with solar, rooftop panels are now the biggest generator of electricity in Australia. Robert went to visit a home in Queensland decked out with the latest kit from Enphase and found out how Rea Solar are helping homeowners to switch to renewable energy.
Since 2006 Enphase have installed more than 52 million microinverters on more than 2.7 million homes in over 145 countries, helping millions of people gain access to clean, affordable, and reliable energy.
Get your tickets for Fully Charged LIVE in Sydney this March: https://au.fullycharged.live/
01:20 So much solar!
02:10 Inverters on every panel
03:20 1 in 3 houses
05:30 Flexible Design
06:54 When do you use energy?
07:53 How much solar do you need?
10:17 Concluding thoughts
Visit our LIVE exhibitions in Australia, UK, USA, Canada & Europe: https://fullycharged.live/
Become a Patreon: https://www.patreon.com/fullychargedshow
Become a YouTube member: use the JOIN button above. Subscribe to the Fully Charged & Everything Electric channels.
Subscribe for episode alerts and the Fully Charged newsletter: https://fullycharged.show/zap-sign-up/
Find us on Twitter: https://twitter.com/fullychargedshw
Follow us on Instagram: https://instagram.com/fullychargedshow
#evs #cleanenergy #electriccar #solar #solarpower #australia #technology #renewableenergy #battery #batterystorage #home
http://streetlucas.xyz/archives/5010
I’m Secretly Married to a Big Shot — Chapter 2279: This Is Not Something You Can Control
Just then, there was another loud knock on the door. “Young Madam, do you hear me? If you don’t come out, we’re going to break the door.”
Old Madam was on her side too, so Madam Mo couldn’t insist on her going overseas to recuperate.
This was Mo Shixiu’s usual office, and some private documents were kept there.
The door was slammed.
But she had already gotten someone to stage a small accident on his way back.
She widened her eyes. “Madam, y-you’re going to take Young Madam away? Where are you taking her?”
She had plenty of time to take Jiang Luoli away.
She would never be able to bewitch her son again.
But she still felt insecure.
Jiang Luoli knew she had locked the door from the inside, so the people outside wouldn’t be able to enter anytime soon.
Someone knocked on the door upstairs.
After locking the door, she walked to the window and shut it.
She heard the bodyguard outside say, “Young Madam, you had better come out yourself. Don’t force us to use violence.”
Hence, Madam Mo didn’t believe that her son had given up willingly.
As long as she got rid of Jiang Luoli, everything would be back to normal.
“What’s that sound?” Mo Shixiu’s voice became serious. “Luoli, where are you now? What happened?”
Her fingers trembled and her phone almost fell to the ground.
She took out her phone and called Mo Shixiu.
Madam Mo wanted to find an excuse to get Jiang Luoli overseas, but before she could think of a suitable reason, she found out that Mo Shixiu had withdrawn from the election.
“N-Nothing…” Jiang Luoli took a deep breath and walked to the window. “Mo Shixiu, where are you? How long until you come home?”
At this thought, Madam Mo smiled and felt much better.
Mo Shixiu had prepared for this election for quite a while.
https://ephyslab.uvigo.es/tramo/ccrs/identification-hot-areas-region-more-influenced-future-climate-changes-evaporation/
Identification of hot areas in the region more influenced by future climate changes in evaporation minus precipitation
According to climate models, some oceanic areas will show greater E-P in the future. Seager and Vecchi defined regions with an increase greater than 0.3 mm/day as hot spot source regions (HSSRs). These were characterized by comparing the periods 2046-2065 and 1961-2000 for boreal winter and boreal summer, as predicted by the 15 GCMs used in the AR4 assessment.
Using a Lagrangian method following Gimeno et al., we select only those HSSRs that affect each target region, in order to detect the potentially vulnerable continental areas that receive moisture from these HSSRs.
Approach: Forward tracking from hot spot sources regions
Temporal scale: Semiannual periods (ONDJFM and AMJJAS)
Patterns: areas with (E-P)1-10 < -0.01 mm/day over the target region
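The hot-spot criterion above amounts to a simple mask over a gridded change field. A minimal pure-Python sketch (the grid values and function name are invented for illustration; the 0.3 mm/day cutoff is the one used above):

```python
# Flag grid cells whose projected change in E-P (future minus baseline,
# in mm/day) exceeds the 0.3 mm/day hot-spot threshold of Seager and Vecchi.
HSSR_THRESHOLD = 0.3  # mm/day

def hot_spot_mask(ep_future, ep_baseline, threshold=HSSR_THRESHOLD):
    return [[(f - b) > threshold for f, b in zip(row_f, row_b)]
            for row_f, row_b in zip(ep_future, ep_baseline)]

# Toy 2x3 grids (mm/day): only cells with an increase above 0.3 are flagged.
future   = [[1.2, 0.5, -0.1], [0.9, 0.35, 0.0]]
baseline = [[0.8, 0.4, -0.2], [0.4, 0.30, 0.1]]
print(hot_spot_mask(future, baseline))  # → [[True, False, False], [True, False, False]]
```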
https://mass-analytica.com/products/massmetasite/information/
Peak Finding, Structure Elucidation and SoM Prediction.
MassMetaSite first translates the data into a common format, standardizing all the different input formats. Second, the system performs automatic peak finding using several proprietary algorithms that consider signal/noise analysis, blank comparison, compound fragmentation, isotope filtering, mass-defect analysis, and chromatogram front and tail analysis. After these algorithms are applied, several chromatographic peaks related to the compound of interest are obtained; each peak is characterized by its retention-time range, observed m/z, and MS2 (secondary) spectra. In parallel, the system computes the structures of all the metabolites that can be formed by applying the chemical rules of metabolic transformations, generating a pool of potential structures characterized by their structure, calculated m/z, and potential fragmentation. In the third step, the system assigns the structures to the chromatographic peaks found and assigns a color to each peak:
- Green: This peak color is obtained when the observed m/z can be reached by the metabolite structure calculated m/z computed using a single generation of the metabolic transformation.
- Brown: This peak color is obtained when the observed m/z can be reached by the metabolite structure calculated m/z computed using two or more generations of the metabolic transformation, i.e., metabolite of metabolite.
- Light blue: This peak color is obtained when the observed m/z can be reached only by computing the selected adducts, dimers or neutral losses to the first or more generation of the metabolite or the parent.
- Pink: This color is obtained if the peak is only detected in the UV chromatogram and no peak can be assigned using in the MS spectra the UV parameters.
- Orange: This color is obtained if the peak is only detected in the Fluorescence chromatogram and no peak can be assigned in the MS spectra using the Fluorescence parameters.
- Black: This color is obtained if the peak is only detected in the Radio chromatogram and no peak can be assigned in the MS spectra using the Radio parameters.
In the fourth step, each computed structure assigned to a peak is scored according to the fragmentation analysis. This process starts with fragment assignment for the parent, comparing the theoretical fragments (obtained by breaking and reorganizing the bonds in the parent structure) with the observed MS and MS2 m/z signals from the spectra.
These assigned parent fragments are then compared with the potential fragmentation of the metabolite structure, using the same bond-breaking rules used for the parent. Fragments found in the metabolite that are shifted or non-shifted relative to the parent's, as well as fragments found only in the analysis of the peak spectra and the metabolite structure, count as positive in the scoring. Structural fragments of the metabolite that are incompatible with the observed m/z count as negative. Solutions with the same score are grouped in a Markush representation.
The final step of the process is the computation of the Site of Metabolism (SoM) score: each metabolite structure corresponds to the modification of a set of atoms in the parent, and each parent atom has a MetaSite score that is used to rank the solutions that have the same mass-spec score.
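The positive/negative tally described above can be illustrated with a toy sketch (the function, tolerance, and m/z values are invented for illustration; MassMetaSite's actual scoring algorithms are proprietary):

```python
# Toy version of a fragment-based score: each predicted metabolite fragment
# whose m/z matches an observed signal (within a tolerance) counts +1, and
# each predicted fragment with no compatible observed m/z counts -1.
def fragment_score(predicted_mz, observed_mz, tol=0.01):
    score = 0
    for mz in predicted_mz:
        if any(abs(mz - obs) <= tol for obs in observed_mz):
            score += 1   # fragment explained by the spectrum
        else:
            score -= 1   # fragment incompatible with the observed m/z
    return score

observed = [105.07, 121.06, 149.06]
print(fragment_score([105.07, 121.06, 200.00], observed))  # → 1 (two hits, one miss)
```

Candidate structures with equal scores would then be grouped together, mirroring the Markush grouping described above.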
https://www.limsforum.com/what-is-this-sensor-and-does-this-app-need-access-to-it/87831/
What is this sensor and does this app need access to it?
We use our mobile phones daily, yet many of us give little consideration to whether those devices are tracking or monitoring our activities. At the root of this cybersecurity issue is, most frequently, the set of permissions granted to one or more applications on the device to access one or more of its sensors. The lackadaisical attitude of the average user toward the cybersecurity of their mobile device can be attributed to a variety of factors, including poor education regarding smartphone use, low public awareness, and ignorance due to developers’ stealthy or “permission hungry” methodologies. Mehrnezhad and Toreini discuss these issues and more at length in a 2019 paper published in Informatics, concluding that while “teaching about general aspects of sensors might not immediately improve people’s ability to perceive the risks,” over time users may “successfully identify over-privileged apps” and make more informed decisions about “modifying the app permissions, uninstalling, or keeping it as-is.”
https://www.vectorvault.com/tag/pin-up/
Imagine what things would be like without digital art. Take a look at the first human likeness ever displayed on a computer screen.
Identifying anchor points with a light pen and then connecting those mathematical locations was the start of something big.
It was the seed for a shift in the way that human beings visualize and express themselves. In fact, Vectorvault exists because of it.
https://tools.sedris.org/seeit_4.1.0.0.htm
SEE-IT is a powerful tool that provides two primary utilities to environmental database users and developers. It checks for conditions that may be inaccurate descriptions of the physical environment they are intended to model. It also evaluates environmental databases to find conditions that can lead to anomalous behaviors by entities operating in the simulated world. In addition, SEE-IT provides data query and filtering mechanisms to highlight, detect, and diagnose environmental data.
The current version is 2.2. This tool is SEDRIS 4.1.x compliant.
There is no requirement that SEDRIS be installed on the host system. All of the required API functions have been statically compiled with the SEE-IT software so that it can execute in a stand-alone mode.
SEE-IT checks for anomalies such as cracks in the terrain, roads that fail to meet at proper junctions, sliver polygons, improper intersections of roads, rivers, and bridges, and a variety of other anomalies usually found in terrain databases. Content checking of non-terrain data sets is also planned for the future.
Data SEE-IT keeps includes:
Data SEE-IT ignores includes:
Also provided as a download is the Condition Report API. This API may be used to develop applications that read SEE-IT condition reports. A sample condition report is provided in the API download. Source code for a sample application, "read_conditions.c", is also provided that exercises all of the API functions.
See Planned Improvements.
POINT OF CONTACT
Send email to email@example.com for questions or assistance in using this application.
As a minimum, please provide the following in your email:
NOTE: All of the platform-specific packages above include
NOTE: The X Server package above is an updated
https://www.analyticssteps.com/blogs/what-adaboost-algorithm-ensemble-learning
There is an old story in which a father hands each of his sons a bundle of four wooden sticks and asks them to break it; every one of them fails. The father then gives each son a single stick from the bundle and asks them to break it, and now they succeed. The lesson of the story: individually one might be weak, but combined, in unity, we become strong. This is what boosting is all about.
Boosting combines all the weak learners (simple models that cannot classify the problem well on their own), and after the combination a majority vote is taken to decide which category the input falls in.
(Suggested blog: Machine learning algorithms)
Let’s consider an example: suppose we need to classify whether a given image shows a horse or a donkey. There are various factors we could look at, such as height, width, a long tail, and more. The problem is that none of these factors alone can tell us perfectly whether the image shows a donkey or a horse, so we consider all of them and take a majority vote; in short, we combine all the weak learners to form a strong predictor.
Above are a few weak learners and the outputs we got from them; if we take a majority vote, we find that most of the weak learners say the input is probably a horse. This is the concept of boosting.
What is ensemble learning?
Ensemble learning is used to boost a machine learning model’s accuracy and efficiency: it takes the decisions of various models and combines them in some way to reach the best decision. These ways include the max voting we discussed earlier, or taking the average.
The averaging method is easy to implement. Suppose you have three prediction scores p1, p2, and p3 from different models, such as logistic regression, a decision tree, and K-nearest neighbours (KNN). To take the average, all we need to do is:
(p1 + p2 + p3) / 3
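As a quick sketch of the averaging method (the three probabilities below are invented for illustration):

```python
# Average the positive-class probabilities predicted by three models,
# e.g. logistic regression, a decision tree, and KNN.
p1, p2, p3 = 0.80, 0.60, 0.70
ensemble_prob = (p1 + p2 + p3) / 3
print(round(ensemble_prob, 2))  # → 0.7
```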
Some of the advanced techniques under ensemble learning are bagging and boosting. Let’s discuss both of them briefly:
The term bagging means that the original dataset is split into several parts, each of which acts as the dataset for an individual model. The main dataset is divided into equal parts; to make each sub-dataset the same size as the original, we sample with replacement so that each model has enough data to learn from. This process is known as bootstrapping.
Bagging in Ensembled Learning
The image above is a representation of bagging: the original dataset is split into three subsets, and each sub-dataset is fed to a model, most often a decision tree. Each model then gives some predictions, and at the end we combine all the predictions; with the help of max voting or averaging we get a strong prediction. All the weak learners combine to form a strong learner and help increase the accuracy of the model.
(Also read: What is LightGBM Algorithm?)
Boosting adds an extra layer of refinement to the model. You now know that in bagging each sub-dataset goes through a model to predict an output; boosting comes into the picture to reduce the errors of the subsequent models. Its working involves a few steps:
The base algorithm assigns weights to the data points of the sub-datasets in order to find the errors.
Errors are calculated using the difference between predicted values and actual values.
The data points that showed errors get their weights updated: they are assigned higher weights.
Another model is trained with the updated weights, and this model tends to perform better than the previous one.
This way every consecutive model learns from the previous one and gives a better result; at the end, the mean of all the outputs is taken to form the optimum outcome.
The representation above shows the combination of all the weak learners with their updated data points, and at the end we can see the generalized outcome from that combination. This also shows that a model may not perform well on the whole dataset, yet can give better results when trained on a portion of it.
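The reweighting loop in the steps above can be sketched from scratch. This is a simplified illustration on one-dimensional toy data with labels in {-1, +1}; the stump and data are invented, while the weight-update formula is the standard AdaBoost one:

```python
import math

# One AdaBoost round: misclassified points get their weights increased so
# the next weak learner focuses on them; alpha is this learner's vote weight.
def adaboost_round(X, y, weights, stump):
    preds = [stump(x) for x in X]
    # Weighted error of this weak learner.
    err = sum(w for w, p, t in zip(weights, preds, y) if p != t)
    alpha = 0.5 * math.log((1 - err) / err)
    # Up-weight mistakes (p*t = -1), down-weight correct points (p*t = +1),
    # then renormalize so the weights sum to 1.
    new_w = [w * math.exp(-alpha * p * t)
             for w, p, t in zip(weights, preds, y)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

X = [1.0, 2.0, 3.0, 4.0]
y = [1, 1, -1, -1]
stump = lambda x: 1 if x < 3.5 else -1   # misclassifies x = 3.0
w0 = [0.25] * 4
alpha, w1 = adaboost_round(X, y, w0, stump)
print(w1.index(max(w1)))  # the misclassified point now carries the largest weight
```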
Some of the boosting algorithms are-:
Let’s implement the AdaBoost algorithm, the principle remains the same as boosting in ensemble learning.
AdaBoost Algorithm Python Implementation Using Sklearn
Our first step is to pre-process the dataset and split it into training and testing parts.
Step1: import necessary libraries.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
We have imported pandas (to read the dataset), numpy, and matplotlib.
ds = pd.read_csv('../datasets/titanic.csv')
Reading the dataset and printing a few features-:
This is what the dataset looks like.
In the next step, we encode the sex of the passengers numerically: ‘0’ for male and ‘1’ for female.
def quantify_sex(sex):
    return 0 if sex == 'male' else 1
ds['Sex'] = ds['Sex'].apply(quantify_sex)
df = ds[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Survived']].dropna()  # drop rows with missing values (e.g. Age)
Next, we assign the feature columns to ‘X’ and the ‘Survived’ target column to ‘y’.
X = df[[each for each in df.columns if each != "Survived"]]
y = pd.DataFrame(df['Survived'], columns=['Survived'])
Importing train_test_split from sklearn to split the dataset.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
Now, import AdaBoostClassifier from the sklearn and fit the dataset.
from sklearn.ensemble import AdaBoostClassifier
ac = AdaBoostClassifier(random_state=90)
Now we train our model; with sklearn’s AdaBoostClassifier the process is straightforward.
ac.fit(X_train, y_train.values.ravel())
Now it is time to calculate the accuracy score of our AdaBoost classifier.
ac.score(X_train, y_train)
On the training dataset, the accuracy is about 85%. Next, we calculate the accuracy score on the test dataset.
ac.score(X_test, y_test)
On the test dataset, we get an overall accuracy of about 83 percent.
The AdaBoost algorithm is an excellent way to boost the performance of a machine learning model by combining all the weak learners into a strong predictor.
(Must read: Machine Learning Tools)
However, if the weak learners are in fact too weak, boosting may lead to overfitting; and if we dig a little deeper, we find that boosting is difficult to scale. Keeping these limitations in mind, if the goal is to increase the performance of the model, this algorithm is well worth using.
https://jobs.amewaregroup.com/jobs/detail/senior-fullstack-developer-java-javascript-128
When you choose to work with Ameware Group, you work with industry experts who are willing to share their knowledge and experience. You will be able to broaden your horizons by working in an open and supportive multicultural environment. We have a great relaxed atmosphere and a competitive salary.
Our client is a well-known US sales data and intelligence platform provider, and they’re looking for a Senior FullStack Developer. In this position, you will oversee the flow of data between servers and users and manage communication between the various data systems in our backend infrastructure; you will also create and implement server-side logic, define and maintain the central database, and ensure optimal performance and responsiveness to requests from the front-end.
Also as Senior FullStack Developer, you will be involved in designing and implementing backend systems for handling large volumes of data. These systems are primarily dedicated to facilitating the movement of data among our various services.
- 5+ years of experience with Java.
- Proficient understanding of Core Java.
- Capability to code in an alternative programming language, such as Python or Node.js.
- Proficiency in crafting and developing diverse web and enterprise-level applications using Java/JEE technologies, including Spring and Hibernate.
- Expertise in NoSQL / MongoDB, DynamoDB, and Redis.
- Strong experience with tools: Maven, Github, and Swagger.
- Solid understanding of SQL and ElasticSearch.
- Hands-on experience in designing interactive applications.
- Proficiency in building web applications using a well-known web framework (JSF, Wicket, GWT, or Spring MVC).
- Proficient understanding of Object-Relational Mapping (ORM) technologies such as JPA2 and Hibernate.
- Familiarity with the practice of test-driven development (TDD).
- Strong experience in AWS Infrastructure and services.
- Capability to create documentation for requirements and specifications.
- Bachelor's degree in Computer Science or a related technical major, or equivalent experience.
- Good oral and written English is a must.
- Conduct a complete software development lifecycle (SDLC) and create and implement new features.
- Enhance the application for optimal speed and scalability.
- Incorporate software elements into a fully operational software system.
- Create plans for software verification and establish procedures for quality assurance.
- Control that the software is regularly updated with the latest features.
- Enforce measures for securing and safeguarding data.
- Create plans for software validation and establish procedures for ensuring quality.
- Remote work in an international company with HQ in the US.
- Competitive salary in USD.
- Flexible working hours to help you manage your work/life balance.
- Career and professional growth.
- Warm and friendly attitude to every specialist.
- Time-off arrangement with compensation and benefits package.
http://ux.stackexchange.com/questions/tagged/messagebox+winforms
What properly goes into the “caption” of a message box?
By convention, what is the best practice for the text that goes into the "caption" field of a text box? The "caption" field is the text that shows up in the title bar: I have experimented with ...
Nov 8 '11 at 21:38
http://azure.efytimes.com/?p=142
“…With SUMMIT, we have ensured that our customers can take advantage of cloud computing to enhance their IT Services to its employees, while reducing the costs they would otherwise incur to purchase, manage, and maintain an expanded on-premises IT infrastructure.”
S. Vijayashanker, Vice President, Symphony Services
Headquartered in Palo Alto, California, Symphony Services is a global software product engineering outsourcing services company. It works with the world’s leading technology product companies, helping them evolve and expand their business and significantly bolster competitive advantage. As a market leader, Symphony wanted to help its customers obtain the benefits of cloud computing. It used the Windows Azure platform to develop SUMMIT, an Enterprise Service and Systems Management product that enables customers to deploy, manage, and monitor applications on the cloud. As a result, customers avoid costly capital expenditures along with the hassle of maintaining an on-premises infrastructure. Symphony provided its customers with vast computing power, and enhanced business agility while maintaining full control of their data and security processes.
Country or Region: India
Industry: IT services
Founded in 2002, Symphony Services is a software product engineering outsourcing services company headquartered in Palo Alto, California.
Symphony wanted to deliver the performance, scalability, and flexibility of cloud computing to its customers, while reducing development, maintenance, and infrastructure costs.
Symphony used the Windows Azure platform to develop SUMMIT, a comprehensive enterprise IT management suite, which allows organizations to improve IT governance, optimize service levels, and reduce related costs.
Avoids capital expenses
Provides superior scalability
Symphony Services is a leading global specialist, providing software product engineering outsourcing services to Independent Software Vendors (ISVs), software enabled businesses and companies whose products contain embedded software. These companies partner with Symphony Services to achieve their business goals, by relying on its commitment to drive real business results and its proven ability to deliver high-quality services and support throughout the product lifecycle.
Symphony Services is headquartered in Palo Alto, California, and has major Global Operations Centers in the US, India and China. Utilizing a multi-shore delivery approach, the company maximizes quality and efficiency, while minimizing costs for its clients. The company’s 3,500+ employees globally support over 1,200 new software releases annually, helping clients drive unparalleled innovation while bringing predictability to costs, schedules and quality of the outsourced engineering process.
Though the solutions are robust and offer tangible benefits, the infrastructure required to run them can be cost-prohibitive for many client businesses. The capital expenditures associated with purchasing hardware, renting server space for application servers and database servers, and hiring IT personnel to manage and maintain those servers are challenges that further compound expenses and can be daunting for businesses.
To help its clients operate more efficiently, Symphony wanted to offer a solution that would reduce the capital costs involved in procuring server hardware and software, managing data centers, and hiring additional personnel to maintain the infrastructure. To give businesses a way to use its IT management software cost-effectively without investing in expensive infrastructure, Symphony decided to develop a line of software-as-a-service solutions delivered through the cloud; that is, the applications and customer data would be hosted in data centers and delivered over the Internet.
To deliver the performance and flexibility of cloud computing to its customers, Symphony developed an ITSM suite called SUMMIT. It is built on the Windows Azure™ platform, an Internet-scale cloud services platform hosted in Microsoft data centers that provides an operating system, data storage, and a set of developer services for creating a range of flexible, cost-effective solutions.
“Windows Azure provided a compelling option,” says S. Vijayashanker, Vice President, Symphony Services. “It is an optimal cloud computing solution that includes more services than any other offerings, not to mention hands-off, low-cost maintenance.”
SUMMIT is a comprehensive enterprise IT management suite that allows organizations of all industry sectors and sizes to improve IT governance, optimize service levels, increase productivity, and reduce related costs, supporting automation and thus providing maximum benefits with minimum resource requirements. An ITIL V3 certified Enterprise Service and Systems Management product, it enables enterprises to manage IT environments on premises or in a Software-as-a-Service (SaaS) environment. It is built according to ITIL (Information Technology Infrastructure Library) best practices, which help enterprises establish a standardized set of processes for better IT management.
The company is primarily offering services to Independent Software Vendors (ISVs) it works with, helping enterprises virtualize its applications on the Microsoft Azure platform. “Cloud is a path breaking innovation,” says Vijayashanker. “We are helping our enterprise customers to evaluate the same and stay away from the burden of capital expenditure.”
SUMMIT supports the following IT processes:
Service Level Management
Availability Management (server, application, network monitoring)
Software Delivery Management
By using SUMMIT, customers retain full control and exclusive use of their data and its security, privacy, and compliance processes. Powering SUMMIT with Windows Azure gives customers the elasticity, scalability, and agility along with the cost benefits of cloud computing.
By using Windows Azure to implement SUMMIT, Symphony is helping to accelerate the adoption of cloud computing among its enterprise customers. The solution provides enterprises with vast computing power, enhanced business flexibility, full control of its important data and processes, and the capabilities it needs to meet a growing range of challenges.
Avoids Capital Expenses
“From a business perspective, the capital cost to build an infrastructure to host an on-premises model was outrageously expensive for our customers,” says Vasudev Nayak, Director, Symphony Services. “With SUMMIT, we have ensured that our customers can take advantage of cloud computing to enhance their IT capacity, while reducing the costs they would otherwise incur to purchase, manage, and maintain an expanded on-premises IT infrastructure.”
With Windows Azure, customers can avoid making costly investments in IT infrastructure. The platform provides a low-cost option for hosting business services. Because Windows Azure is hosted in Microsoft data centers, enterprise customers who use the platform don't pay any upfront costs; instead, they pay only for the compute and storage that they use.
By using Windows Azure to build SUMMIT, Symphony is helping customers gain operational visibility and granular application control. As a result, customers are assured of operational excellence and optimized performance. “As the solution is hosted in Microsoft data centers, our customers are saving significant support time and costs that they would have spent otherwise for monitoring, and managing the IT infrastructure,” says Vasudev.
Provides Superior Scalability
In addition to cutting capital expenses, customers can also scale up and down with Windows Azure. “With SUMMIT, our customers do not need to worry about configuring and deploying new physical servers to meet their increasing computing needs,” says Vijayashanker. “They can quickly scale up when they need extra compute and storage resources, and scale down when fewer resources are needed. This way they do not have to pay for servers that are not in use.”
Windows Azure Platform
The Windows Azure platform provides an excellent foundation for expanding online product and service offerings. The main components include:
Windows Azure. Windows Azure is the development, service hosting, and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage Web applications on the Internet through Microsoft data centers. In addition, Windows Azure serves developers’ connectivity needs through the following services.
Windows Azure Marketplace DataMarket. The DataMarket section of the Windows Azure Marketplace, formerly known as Codename Dallas, includes data, imagery, and real-time web services from leading commercial data providers and authoritative public data sources.
Windows Azure platform AppFabric. With Windows Azure platform AppFabric, developers can build and manage applications more easily both on-premises and in the cloud.
The AppFabric Service Bus connects services and applications across network boundaries to help developers build distributed applications.
The AppFabric Access Control service provides federated, claims-based access control for REST Web services.
Microsoft SQL Azure
Microsoft SQL Azure offers the first cloud-based relational and self-managed database service built on Microsoft SQL Server 2008 technologies.
To learn more about the Windows Azure platform, visit:
https://poststatus.com/emoji-support-wordpress-happening/
Emoji. How does this not already exist in WordPress? I mean, it’s taken over our textual expressions everywhere else.
Well, Gary Pendergast and team Emoji have you covered. It turns out your Emoji obsession isn’t that easy to support, so there’s a feature plugin for that, and it’s slated for 4.2. Office hours available if you want to get your Emoji nerd on.
I’ve enjoyed the core Slack chats around this one.
http://mathbio.nimr.mrc.ac.uk/wiki/POPS
Server Version 1.0.6; POPS* Version 1.5.3
POPS* is a fast algorithm to calculate solvent accessible surface areas (SASAs) of proteins and nucleic acids at atomic (default) and residue (coarse-grained) level. Atomic and residue area parameters have been optimised versus an accurate all-atom method (Naccess). Residue areas (coarse-grained) are simulated with a single sphere centered at the C-alpha atom for amino acids and at the P atom for nucleotides.
An analytical formula is used for the calculation of solvent accessibilities. The formula is simple, easily derivable and fast to compute, therefore, practical for use in molecular dynamics (MD) simulations as an approximation to the first solvation shell. The default output contains: list of hydrophobic, hydrophilic and total contributions to the accessible surface area of the entire molecule (all atoms), first model only.
After submission, a holding page provides access to the result page. Results (including those of previous runs) can also be accessed through the Retrieve function.
The POPS* program code can be downloaded
Users publishing results obtained with the POPS* program should acknowledge its use by citing Fraternali and Cavallo (2002) and for the POPS* server Cavallo et al. (2003).
The optimised parameters are listed here:
- PDB structure file
- Entering a PDB identifier (lower case or upper case without the extension .pdb) in the first text window will automatically perform the POPS* calculation on the corresponding structure. Uploading of structure files by the user can be performed by specifying the file path and name in the second text window. Please ensure that atom and residue names conform to the PDB format.
- Calculation of accessible surface area on residue level.
- probe radius
- The radius of the surface probe (solvent molecule) in Angstrom.
- multiple models (option currently disabled)
- By default the POPS* calculation will be performed on the first model only, if a structure file containing multiple models is chosen/uploaded. Activation of the 'Multiple models' button triggers POPS* calculation of all models.
- molecule composition
- Number of chains, standard residues, total residues and atoms.
- atom types
- List of atom types and their POPS* parameters.
- molecule topology
- Number of bonds, angles, torsions and non-bonded interactions.
- POPS* area per atom
- List of atom areas [A2], number of overlaps and atom grouping (1: positive, 2: negative, 3: polar, 4: aromatic, 5: aliphatic).
- POPS* area per residue
- List of residue areas [A2] (hydrophilic, hydrophobic, total) and number of overlaps.
- POPS* area per chain
- List of chain areas [A2] (hydrophilic, hydrophobic, total).
- neighbour list
- List of neighbours per atom, provided in a separate output file.
The area equation is defined by a product Π of terms that estimate
the reduction of the SASA of atom i by the overlap with each of its neighbours j (Hasel et al., 1988):
SASA_i = S_i · Π_{j=1..N} [ 1 − (p_i · p_ij · b_ij(r_ij)) / S_i ]
i is the atom for which the POPS* area is computed, j is any of N neighbour atoms.
pi is an atom type specific SASA parameter.
pij is a sphere overlap parameter depending on the degree of bonding between i and j (1-2, 1-3, 1-4, 1-(>4)).
bij is a geometric construct based on the radii and distance (rij) of i and j.
Si is the SASA of the free atom i (no neighbours).
The atom specific parameters (radii, SASAs) are listed in the Parameters files for the atoms of all standard PDB residues, followed by the connectivity parameters (pij, bij) and the solvent radius for water.
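As a sketch of how the area formula above can be evaluated, here is a small Python implementation. The parameter values in the demo (p, p_ij) and the closed form used for b_ij are illustrative assumptions — the optimised POPS* parameters live in the Parameters files, and p_ij really varies with the bonding degree:

```python
import math

R_SOLV = 1.4  # water probe radius in Angstrom (the server default)

def free_sasa(r, r_solv=R_SOLV):
    """S_i: SASA of an isolated atom of radius r (no neighbours)."""
    return 4.0 * math.pi * (r + r_solv) ** 2

def overlap_b(ri, rj, d, r_solv=R_SOLV):
    """b_ij: geometric sphere-overlap term for atoms i, j at distance d.
    This closed form follows Hasel et al. (1988) and is an illustrative
    assumption here, not the exact POPS* parameterisation."""
    Ri, Rj = ri + r_solv, rj + r_solv
    if d >= Ri + Rj:
        return 0.0  # solvent-expanded spheres do not overlap
    return math.pi * Ri * (Ri + Rj - d) * (1.0 + (Rj - Ri) / d)

def pops_atom_area(i, atoms, p, p_ij):
    """SASA_i = S_i * prod_j [ 1 - p_i * p_ij * b_ij(r_ij) / S_i ].
    `atoms` holds (x, y, z, radius) tuples; p[i] is the atom-type
    parameter and p_ij a single connectivity parameter (in POPS* it
    depends on the bonding degree: 1-2, 1-3, 1-4, 1-(>4))."""
    xi, yi, zi, ri = atoms[i]
    s_i = free_sasa(ri)
    area = s_i
    for j, (xj, yj, zj, rj) in enumerate(atoms):
        if j == i:
            continue
        d = math.dist((xi, yi, zi), (xj, yj, zj))
        area *= 1.0 - (p[i] * p_ij * overlap_b(ri, rj, d)) / s_i
    return area
```

An isolated atom recovers its free SASA, and a close neighbour reduces it, which is the behaviour the product form is designed to capture.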
Fraternali, F. and van Gunsteren, W.F.
An efficient mean solvation force model for use in molecular dynamics simulations of proteins in aqueous solution.
Journal of Molecular Biology 256 (1996) 939-948. [PubMed Abstract]
Fraternali, F. and Cavallo, L.
Parameter optimized surfaces (POPS*): analysis of key interactions and conformational changes in the ribosome.
Nucleic Acids Research 30 (2002) 2950-2960. [PDF]
Cavallo, L., Kleinjung, J. and Fraternali, F.
POPS: A fast algorithm for solvent accessible surface areas at atomic and residue level.
Nucleic Acids Research 31 (2003) 3364-3366. [PDF]
Kleinjung, J. and Fraternali, F.
POPSCOMP: an automated interaction analysis of biomolecular complexes.
Nucleic Acids Research 33 (2005) W342-W346. [PDF]
The POPS server was developed by Franca Fraternali, the POPS logo was designed by Domenico Fraternali. Contact Authors
https://ipvm.com/forums/video-surveillance/topics/upcoming-vms-comparison-research-testing
I want us to do some fundamental research to better understand how specific VMS features work. I am throwing this out to gather feedback from members on what they think and what they would like to see.
Here's a few examples:
- Video export - What's the process of video exporting? What options are available? How is video played back once exported?
- Camera discovery / addition - How does camera discovery work? What options does it provide? How about manual addition?
- Digital zoom - How do you enable digital zoom? How do you digitally zoom, pan and tilt? How do you return to the 'full' view?
I imagine there being a few dozen of these, including how do I review an alert, how do I search for video at a specific time, how do I create a view, etc.
My rationale is that I want a clear understanding of tradeoffs - are there major variances? is someone doing something really wrong? is someone missing something fundamental? etc.
VMSes to Use
The plan is to use 5 VMSes (in alphabetical order):
We'd like to use VMSes that are broadly used so that readers can maximize value from looking at their own (or likely alternatives). On the other hand, the VMS market is deeply fragmented and there are numerous other VMSes that are widely used.
We certainly plan to test other VMSes in the future and, when we do so, we can compare them to this initial group test as a baseline (e.g., Seetec's camera discovery is better/worse/different than those 5).
My idea is to publish one such comparison each week - Week 1, video export, Week 2 camera discovery, Week 3 - motion based search, Week 4 - recording setup, etc.
Let me know if you have feedback or questions.
https://www.nbi.io/docs/lookup-matches/
A lookup-matches test should be considered when you want to be sure that values found in the candidate result-set are the same as values found in the reference result-set.
Unlike equivalence tests, lookup tests don't require the uniqueness of the rows in the reference or candidate result-sets. If a row from the candidate result-set has one or more matching keys in the reference table, then the values associated with at least one of these matches must be equal in the reference and candidate result-sets. If a row from the candidate result-set has no matching key in the reference result-set, it is not considered an issue for the test, and this row is counted as successful.
The system under test is any result-set representing the candidate table.
The assertion is defined by the xml element lookup-matches. Some parts of the assertion are identical to the parts defined for the lookup-exists test. See jointure and Reference result-set for more information.
In addition to the keys defined in the join element, the test expects a definition of the values that must be compared in the two result-sets. At the moment, NBi supports strict equality only and does not support a tolerance when comparing the values of the reference and candidate result-sets.
The definition of the mapping between the columns of the two result-sets are defined with the help of the mapping or using elements.
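The matching semantics described above can be sketched in a few lines of Python; the dict-based rows and the key/value column names in the demo are illustrative, not NBi's XML syntax:

```python
def lookup_matches(candidate, reference, keys, values):
    """A candidate row fails only when reference rows share its join keys
    and none of them agrees on all compared values; candidate rows with
    no matching key count as successful (unlike lookup-exists)."""
    ref_index = {}
    for row in reference:
        ref_index.setdefault(tuple(row[k] for k in keys), []).append(row)
    violations = []
    for row in candidate:
        matches = ref_index.get(tuple(row[k] for k in keys), [])
        if not matches:
            continue  # unmatched rows are not an issue for this test
        if not any(all(row[v] == m[v] for v in values) for m in matches):
            violations.append(row)
    return violations

# note the duplicate key in the reference: uniqueness is not required
reference = [{"id": 1, "amount": 10}, {"id": 1, "amount": 12}]
candidate = [{"id": 1, "amount": 12},   # agrees with one reference match
             {"id": 2, "amount": 99}]   # no matching key: still successful
print(lookup_matches(candidate, reference, ["id"], ["amount"]))  # → []
```

A candidate row is only reported when a join match exists but every match disagrees on a compared value, mirroring the strict-equality comparison described above.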
https://fungies.io/indie_directory/cats-are-liquid-a-light-in-the-shadows/
Return to all indie
Cats are Liquid - A Light in the Shadows
Author: Last Quarter Studios
Author Email: https://twitter.com/lastquarterdev
Cats are Liquid - A Light in the Shadows is a 2D platformer about a cat with the ability to transform into liquid. The game has 90 levels, spread across 9 different worlds, and a minimalistic but colorful style. Along the way the game introduces new mechanics, like flying and summoning bombs to break down walls. The story is about a cat whose owner locked her in a set of rooms. She desperately wants to get out, but the rooms just keep continuing. Along the way the cat meets a new "friend" and gains new abilities. The story is told through small in-game text pieces.
$5 or less
Average session length:
A few minutes
http://tboxmy.blogspot.com/2009/05/first-look-at-kubuntu-904-jaunty.html
Lenovo R60 (9459BF8) 32 bits
CPU T5600 @ 1.83GHz
L2 Cache 1984KiB
Here are some quick comments:
This is the KDE Start Menu. One of its main menu options is Favourites, which contains the Konqueror, Kontact, System Settings, Dolphin, Kopete and Amarok entries. Users should add frequently used programs here.
There is a desktop that takes up only a small place on the dashboard. Plasmoid widgets can be added to the dashboard to improve the user experience. To view only the dashboard, press Ctrl+F12 (or press the Show Dashboard applet in the bottom panel) to dim all applications and view only the widgets and desktop.
Default Kubuntu had only 2 workspaces and no keyboard shortcut to switch between many workspaces. I could only return to the 1st workspace with Ctrl+F1.
1. Right click the Pager applet (in the bottom panel), choose "Configure Desktop", and increase to 4 workspaces.
The screen is sharp and nice, which is why I have always chosen KDE when looks matter.
Quick look at accessing applications
Web browser: Konqueror 4.2.2
Office: OpenOffice.org 3.0.1
Print screen: Ksnapshot 0.8.1 (The PrtSc button problem have been fixed)
Terminal: Konsole 2.2.2
System process: (Ctrl + Esc)
Software installer: KPackageKit
Desktop: KDE 4.2.2 (Ctrl + F12) One cool and futuristic desktop for all.
Compression tool: Ark 2.12 (Integrated well with Dolphin)
- Initially I could not log in to my home wireless network. I had to configure it manually with iwconfig; from then on it loaded automatically. So, if you have the same problem, open a terminal and type the following to detect your wireless network (e.g. mywireless):
$ sudo iwlist wlan0 scanning essid mywireless
- I could resize the KPanel to a smaller size easily.
- The first site I visited with Konqueror was blogspot.com and all looked good. The only problem is that the default menus when editing a post did not allow me to change the fonts; the only options available were spelling, Add Image and Preview. A pop-up appeared asking to install the Shockwave Flash plugin but it kept failing to install. See below for the resolution.
- Konqueror did not refresh properly when used as a web browser.
- The Amarok did not have any sound when playing an mp3. The midi file was not supported at all.
- The add/remove software manager did not seem to function. I couldn't get a list of software and it kept crashing. I needed to choose "Software Updates" -> Refresh. The first thing I did was install the kubuntu-restricted-extras package.
Post installation recommendations
At the terminal run
$ sudo apt-get install flashplugin-installer msttcorefonts
Open Konqueror, when it ask to install the Flash plugin this will now be successful.
$ sudo apt-get install kubuntu-restricted-extras
This will add JRE, mp3, mpeg, odbc, unrar and other useful support.
$ sudo apt-get install tpb
This will allow the Thinkpad special keys to be available. To install this package, the following package is removed:
$ sudo apt-get install gdecrypt password-gorilla
This allows the disk to be encrypted and manage passwords.
$ sudo apt-get install wine wine-gecko
This will allow installation of standard MS Windows based applications.
$ sudo apt-get install ssh sshfs ksshaskpass kdessh
Allow secure SSH connection to and from SSH servers.
Jaunty is a great Linux distro, but those who cannot accept the KDE4 changes should continue with Gnome on Ubuntu Linux.
http://crypto.stackexchange.com/questions/tagged/elliptic-curves+dsa
Are there security issues with discrete logarithm keys not being uniformly distributed?
Generally, algorithms based on discrete logarithm specify that private keys are chosen as scalars between 1 and the order of the group (denoted $q$ here). For instance IEEE P1363 and FIPS 186-3 both ...
Jul 25 '11 at 15:50
https://bugs.ruby-lang.org/users/271
- Email: firstname.lastname@example.org
- Registered on: 12/11/2008
- Last connection: 09/07/2016
- 11:12 AM Ruby trunk Revision 56192: describe "0.class == Integer" to detect the feature.
- 04:57 AM Ruby trunk Revision 56102: replace fixnum by integer in documents.
- 08:43 AM Ruby trunk Feature #12695: File.expand_path should resolve ~/ using /etc/passwd when HOME is not set
- I think it's a good idea.
Although POSIX doesn't specify it for the shell, we can define it in Ruby.
- 07:14 AM Ruby trunk Feature #859 (Closed): open-uri doesn't allow redirection to https
- Applied in changeset r56085.
lib/open-uri.rb: Allow http to https redirection.
* lib/open-uri.rb: Allow ...
- 07:14 AM Ruby trunk Revision 56085: lib/open-uri.rb: Allow http to https redirection.
- * lib/open-uri.rb: Allow http to https redirection.
Note that https to http is still forbidden.
- 02:08 PM Ruby trunk Revision 55634: describe RUBY_INTEGER_UNIFICATION.
- 01:25 AM Ruby trunk Feature #12217: Introducing Enumerable#sum for precision compensated summation and revert r54237
- Richard Schneeman wrote:
> Would be nice if we could match behavior with Rails Enumerable#sum https://github.com/rai...
- 07:35 AM Ruby trunk Revision 55465: update description about rb_cFixnum and rb_cBignum.
- 02:09 PM Ruby trunk Misc #10473: Date.to_datetime.to_time != Date.to_time
- Similar problem exists on Samoa (Pacific/Apia).
There is no 2011-12-30 in Pacific/Apia.
- 12:39 PM Ruby trunk Misc #10473: Date.to_datetime.to_time != Date.to_time
- Akira Tanaka wrote:
> The proposed patch seems fine.
> However I recommend to add more tests for old dates arou...
Also available in: Atom
http://www.slacky.eu/forum/viewtopic.php?f=20&t=28386
5. Using The Stack
A section of your program's memory is reserved for use as a stack. The Intel 80386 and above microprocessors contain a register called stack pointer, esp, which stores the address of the top of stack. Figure 1 below shows three integer values, 49,30 and 72, stored on the stack (each integer occupying four bytes) with esp register holding the address of the top of stack.
Unlike a stack analogous to a pile of bricks growing upwards, on Intel machines the stack grows downwards. Figure 2 shows the stack layout after the execution of the instruction pushl $15.
The stack pointer register is decremented by four and the number 15 is stored as four bytes at locations 1988, 1989, 1990 and 1991.
The instruction popl %eax copies the value at top of stack (four bytes) to the eax register and increments esp by four. What if you do not want to copy the value at top of stack to any register? You just execute the instruction addl $4, %esp which simply increments the stack pointer.
In Listing 3, the instruction call foo pushes the address of the instruction after the call in the calling program on to the stack and branches to foo. The subroutine ends with ret which transfers control to the instruction whose address is taken from the top of stack. Obviously, the top of stack must contain a valid return address.
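The push/pop mechanics described above can be modelled with a toy Python simulation. The addresses match the article's Figure 2 example; this is only an illustration of the semantics, not real memory access:

```python
class Stack386:
    """Toy model of the downward-growing x86 stack described above."""
    def __init__(self, top=1992):
        self.mem = {}    # simulated byte-addressable memory
        self.esp = top   # stack pointer: address of the top of stack

    def pushl(self, value):
        self.esp -= 4                      # stack grows downwards
        for i in range(4):                 # store 4 bytes, little-endian
            self.mem[self.esp + i] = (value >> (8 * i)) & 0xFF

    def popl(self):
        value = sum(self.mem[self.esp + i] << (8 * i) for i in range(4))
        self.esp += 4                      # discard the top of stack
        return value

s = Stack386()
s.pushl(15)      # like `pushl $15`: esp drops from 1992 to 1988
print(s.esp)     # → 1988
print(s.popl())  # → 15, and esp is back at 1992
```

The four stored bytes occupy addresses 1988-1991, exactly as in the article's example.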
Even though I have never used asm.
http://www.3daet.com/pages/740/factorial/
Factorials using python.
Mathematics and python.
Date Created:Friday March 09th, 2007 09:38 AM
Date Modified:Saturday August 02nd, 2008 03:32 PM
def fact(x):
    result = 1
    while x > 1:
        result *= x
        x -= 1
    return result
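As a quick cross-check, the loop agrees with the standard library (the fact definition is repeated so this block runs standalone):

```python
import math

def fact(x):
    result = 1
    while x > 1:
        result *= x
        x -= 1
    return result

# agrees with math.factorial for small inputs
assert all(fact(n) == math.factorial(n) for n in range(10))
print(fact(5))  # → 120
```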
Download: factorial.py 107 B
Please login or Click Here to register for downloads
Factorial by Dan Lynch
is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
Based on a work at www.3daet.com
Permissions beyond the scope of this license may be available at http://www.3daet.com
https://linuxgsm.com/category/linuxgsm-news/
This release adds 2 new game servers, Counter-Strike 2 and The Front, as well as a rework of alerts.
With the release of Counter-Strike 2, Valve has made a Linux dedicated server available. However, there are a few caveats currently.
Firstly, Valve has not officially announced CS2 community game servers, and they are still listed as “coming soon”, which is why the CS2 server currently seems incomplete (but working). So, treat the CS2 server as a “work in progress”.
Secondly, the dedicated server is currently only available via the game client. This means that the whole CS2 game client (33 GB) is downloaded along with the game server, and you are required to log in to a Steam account and add CS2 to the account to download the server. I recommend creating a new Steam account just for your game server; check out the docs for info about Steam accounts. When Valve changes to using the anonymous server appid I will update the server. Update: anonymous login is now available.
CS2 replaces CS:GO, so the LinuxGSM CS:GO server will be deprecated in a future release.
I have also noticed there are not many community Counter-Strike 2 servers currently available on the server list, with it mostly being filled with fake servers. Hopefully Valve tidies things up soon, and community servers fully return.
I have refactored alerts to improve their consistency, look (new colours), and wording. A new alert (statusalert) can also be enabled to trigger when a server is started, stopped, or restarted, and there are new alerts for backups and wipes (Rust). Game icons have also been added to supported services. The More Info button is now hidden if unused, and Game Tracker links have been removed.
Legacy Code and Deprecation
In this release, most code kept for legacy purposes has been removed. This code mostly allowed old settings from _default.cfg to continue to work. Most of the legacy code is around 2-3 years old, and users should have migrated to the new setting names by now.
I have also decided to deprecate Last Oasis. This is due to the game being abandoned, the game server no longer functioning properly, as well as abysmal user reviews. From time to time game servers that are broken and unused will be removed as each game server takes time to maintain with problematic game servers often taking up a disproportionate amount of time.
Mailgun has also been removed as an alert type as it is now a paid service as well as stats showing for a long time it was completely unused.
I spend a lot of time thinking about the user experience of LinuxGSM and am slowly working on improving the consistency, look, and feel of LinuxGSM as well as streamlining code development. It is a surprisingly difficult task and requires careful consideration. Over the next few releases, you might notice small UX changes to various areas of LinuxGSM. In this release, I have reworked sleep timers to allow me more control of the timing of what is displayed on the CLI. LinuxGSM should feel faster in places or provide more time to read messages when required.
The LinuxGSM Docker image continues to improve the more it is being used. The docker containers have massively helped me with development as I am now able to quickly deploy and run all 130+ servers, allowing me to identify servers that have broken (I hate the discord alert sound now) as well as test code across all game servers at the same time.
If you want to try it out you can check out LinuxGSM docker at Docker Hub.
LinuxGSM Discord Bot
Did you know that LinuxGSM has its own discord bot? You can use it to query your game server as well as the Steam master server. This allows you to check your game server can be seen from the internet. Join the LinuxGSM Discord server to try it out.
Thank you to everyone who sponsors the project. I have had a recent uptick in new sponsors that have decided to support my development work. If you would like to sponsor me please visit my GitHub Sponsors page. It is also now possible to use sponsors via Patreon from GitHub Sponsors.
And finally as always, thank you to everyone who takes the time to help and support the project.
https://www.mitacs.ca/en/discipline/engineering-computer-electrical?page=54
Attacks on computer networks happen every day, but many go undetected. Not all attacks succeed, but the ones that do often leave so called “back doors” behind that allow the attackers to easily gain access back into the computer network without having to attack it further. This project focuses on the use of mathematics and statistics to determine what features of network traffic (the data flowing on the wire between a computer network and the rest of the internet) can be used to determine if an unauthorized back door is present in a computer network.
The proposed research aims at developing control strategies under the paradigm of Demand Response (DR) in the context of the Smart Grid in order to improve energy efficiency and reduce operational cost in commercial buildings and communities. The emphasis will be put on consumer-side energy management strategies that are able to balance energy demand and supply and to reduce the overall operational cost while providing enhanced performance.
This project aims to identify and fix the gaps in existing business processes of commercial insurance brokers group. In order to improve the efficiency and effectiveness of the processes, I plan to redesign the processes to eliminate bottlenecks and improve the service quality by developing workflow management system for the organization. Workflow management system deals with supporting business processes in the organization.
User experience and battery life are key concerns for smartphone makers. Given the growing trend of graphic-rich applications on mobile devices, embedded Graphics Processing Units (GPUs) are increasingly being incorporated in smartphone hardware platforms. In this project the intern will develop fast, early and accurate models of embedded GPUs, before the GPU hardware is available.
The project is aimed at implementing real-time processing techniques for video acquisition, compression and transmission. The project focuses on defining solutions in order to solve constraints within the system related to bandwidth and the battery energy of the sensor. Depending on the application, the acquisition and compression procedures could be based on extracting significant features and compressing them in a lossless fashion followed by data transmission over a wireless channel.
An electrical power system is designed to provide safe and reliable supply to customers. However well designed the system, disturbances are unavoidable during the operation and the system should be able to continue secure operation. In fact, if it can be early determined that the system is moving towards an unsecure region, the operators can take necessary safety actions to keep the system secure. Thus, the main goal of this study is to develop novel techniques to monitor the stability of an electrical power system in real time.
Ultra large software systems play an increasing important role in our lives. They are systems such as the world wide banking system, mobile communications systems, social networks, online retailers and online gaming systems. Ultra large software systems are critical and failures in the systems can critically impact the economic health of companies, markets and even countries.
This proposal presents research projects to evaluate a new technology, Electrovestibulography (EVestG™), that holds potential to objectively, quickly and quantitatively measure the severity of concussion, thus aiding in its diagnosis. EVestG signals are recorded painlessly and non-invasively from the external ear in response to a vestibular stimulus; they are the brain signals modulated by the vestibular response. When concussed, people commonly experience balance (vestibular) problems and dizziness, as well as confused thinking.
The proposed internship, developed between the University of Waterloo and Philip Beesley Architect Inc. (PBAI), will develop and validate prototypes for novel expressive interactive sculpture environments. The work incorporates human perceptual studies and machine learning techniques for generating models for perception and generation of affective expression within experimental architecture and installations, and systematically deriving the relationship between affect and structure.
The ultimate aim of this project is to design and develop methods and tools for classifying attributes of books such as genre, style, tone, and likelihood of being popular. Towards this end we will make use of various information types available on books and users of the Kobo catalog, including the text, meta-data associated with the text, and user features associated with readers of the text. This is a large undertaking.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649841.6/warc/CC-MAIN-20191014074313-20191014101313-00274.warc.gz
|
CC-MAIN-2019-43
| 4,678
| 10
|
http://ucancode.net/Database-Diagram-Component.htm
|
code
|
Database Diagram Component C++ Source Code
Here are some useful add-in related links for the Diagram Component C++ Source Code that I've started collecting. This section is still very much under construction, and more links will be appearing in the future. If any link dies, please contact us.
Create interactive flowchart and workflow diagrams. E-XD++ Suite lets you quickly build flowchart-enabled applications such as workflow diagrams, database diagrams, communication networks, organizational charts, process flows, state transition diagrams, telephone call centers, CRM (Customer Relationship Management) systems, expert systems, graph theory, quality control diagrams, and so on. E-XD++ Suite supports distinct colors, fonts, shapes, styles, pictures, text, and so on for each object of the flowchart diagram (node or link). E-XD++ Suite also supports custom shapes, metafiles, serialization, multi-level undo/redo, printing, zooming, Bezier and spline curves, reflexive links, link jumps, link autorouting, multiselection, scrolling, user data association and more. It is Visual Studio 2005 (VS2005, VS 2005) and Visual Studio .NET 2003 compatible. Royalty-free redistribution and many samples are included for Visual Basic .NET (VB.NET).
UML Database Diagram Tool with 100% C++ Source Code!
Introducing UCanCode XD++ MFC Library, a breakthrough solution that provides the most cost-effective way to publish, process and securely manage database diagram drawings. Full .NET support is included.
Powerful, flexible, and easy-to-use Diagram Components.
Powerful and flexible enough to create diagrams exactly the way you want them to appear, and so easy to use that you will be able to prototype your application in just a few minutes. With features such as automatic layout, multiple layers, collapsible sub-graphs, snap-to connection points, XML, SVG, and more, E-XD++ has the power and flexibility you need to create sophisticated diagrams, quickly and easily. Events such as click, double-click, hover, select, rubber-band select, copy, delete, resize and move are supported. Operations such as drag-and-drop, unlimited undo/redo, and clipboard operations are common and complex, and are expected by today's sophisticated users. It fully supports importing the ArcGIS, SVG and DXF file formats.
UCanCode E-XD++ is capable of handling many thousands of nodes and edges, up to hundreds of thousands depending upon the complexity of the nodes you wish to draw and the operations you wish to allow. Our graphical classes are extremely lightweight objects, enabling outstanding performance.
Save Time and Money and Gain Reliability.
A diagram is worth 1,000 words, and E-XD++ is shipped with more than 500,000 lines of well-designed and well-tested code! It is used by hundreds of the world's most quality-conscious companies, and it will save you thousands of hours of complex coding and years of maintenance.
MFC/Database Diagram Component Enterprise Edition is the world's leading MFC/C++ visualization component (MFC/Database Diagram Component Source Code). Renowned for incredibly rich graphics, E-XD++ MFC/Database Diagram Component Source Code helps developers build applications that offer unparalleled functionality. Outstanding productivity lowers project risk and reduces maintenance headaches. With 8 years of dedicated research and development, UCanCode leads the market for visualization technologies, providing outstanding customer support. With E-XD++ MFC/Database Diagram Component Source Code Enterprise you can easily build Visio 2003-like applications.
MFC/Database Diagram Component Library Professional Edition is an MFC/Database Diagram Component Source Code for developing Microsoft Visio-like interactive 2D graphics and diagramming applications. E-XD++ MFC/Database Diagram Component Source Code stores graphical objects in a node (scene) graph and renders those objects onto the screen.
C++ MFC Library
The E-XD++ MFC/Database Diagram Component product supports both vector and raster graphics on the drawing surface. E-XD++ MFC/Database Diagram Component Source Code includes all the features of the XD++ MFC/Database Diagram Component Source Code Professional Edition, and it also includes many important new features of Visio 2007.
UCCDraw ActiveX Control is an ActiveX control that allows creation and editing of Visio-style charts from within your application. It allows you to create database diagrams, vector drawings, raster images and more, with the ability to include hyperlinks and various shading and coloring effects. The base framework of UCCDraw ActiveX Control is XD++ MFC/Database Diagram Component. You can group objects together, include images and text, link them together and apply custom drawing effects to create charts similar to those of Microsoft Visio, Adobe Illustrator, and CorelDRAW.
TFC/C++ gives you all the components your development team needs to display or select date and/or time values in any application, including a Month calendar control and a Year calendar control. TFC/C++ makes it easy to incorporate robust calendar features in your program's interface today.
Having graphical editing capabilities can be a real asset, if not an essential feature, for many tools and applications. Examples are not hard to think of: UML tools, GUI builders, and in fact any application which comprises a dynamic model that can be visualized. With XD++, VC++ developers have at their disposal a framework which can really simplify the development of graphical editing applications.
The diagram represents the whole database as easy-to-understand graphical objects. The diagram's appearance can be adjusted according to your requirements and wishes. You can set the colors of the table background, table lines and references. You can configure how table information is represented: enable the display of column and index icons, indexes, keys, foreign keys and more.
The following is a new sample built with the latest edition of E-XD++ Enterprise V11.0; 100% VC++/MFC source code is shipped with the full edition.
E-XD++ is the best component for database diagram design and drawing among visualization components: FULL VC++ Source Code Shipped!
XD++ Diagrammer Suite is the world's leading VC++ and .NET visualization component. Renowned for incredibly rich graphics, XD++ helps thousands of developers build applications that offer unparalleled functionality. Outstanding productivity lowers project risk and reduces maintenance headaches. With 20 years of dedicated research and development, UCanCode leads the market for visualization technologies, providing outstanding customer support.
UCanCode Advance E-XD++ CAD Drawing and Printing Solution Source Code for C/C++, .NET V2023 is released!
UCanCode Advance E-XD++ HMI & SCADA Source Code Solution for C/C++, .NET V2023 is released!
UCanCode Advance E-XD++ GIS SVG Drawing and Printing Solution Source Code for C/C++, .NET V2023 is released!
Contact UCanCode Software to buy the source code or learn more:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00555.warc.gz
|
CC-MAIN-2023-14
| 7,126
| 130
|
https://p2p.wrox.com/asp-net-2-0-basics/47068-title-property-webpart.html
|
code
|
"Title" property of a WebPart
How do I set the "Title" property of a WebPart? I don't see this property in the Properties Inspector, so I must add it directly in the source:
<asp:Label ID="Label1" runat="server" Text="Label" Title="Le mot du jour"></asp:Label>
VS says: attribute "Title" is not a valid attribute of element "Label",
but I get no execution error. Of what element is "Title" a valid attribute?
Thanks for your help
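A note on this: Title is not a property of asp:Label itself, and unknown attributes on a WebControl are simply passed through to the rendered HTML, which is why the page still runs. When the control sits inside a WebPartZone, ASP.NET wraps it in a GenericWebPart, and a lowercase title attribute on the control is mapped to that part's Title. A minimal sketch (zone and control IDs are illustrative):

```aspx
<asp:WebPartZone ID="Zone1" runat="server">
  <ZoneTemplate>
    <%-- inside a zone, the title attribute becomes the GenericWebPart's Title --%>
    <asp:Label ID="Label1" runat="server" Text="Label" title="Le mot du jour" />
  </ZoneTemplate>
</asp:WebPartZone>
```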
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375274.88/warc/CC-MAIN-20210306162308-20210306192308-00172.warc.gz
|
CC-MAIN-2021-10
| 424
| 6
|
http://www.coderanch.com/t/354704/Servlets/java/jspsmartupload-upload-image
|
code
|
Hello guys... I've spent hours trying to figure out how this 'utility' works; maybe you can give me a hand. This is the scenario: I want to use jspSmartUpload to upload an image to a database, but I can't find any examples.
as you can see guys I have no clue how to achieve what I want. any suggestions or ideas would be appreciate. thanks
I'm not going to be a Rock Star. I'm going to be a LEGEND! --Freddie Mercury
Hi, after a first look at your code for the servlet: where are the doGet(HttpServletRequest req, HttpServletResponse res) and doPost(HttpServletRequest req, HttpServletResponse res) methods? They have to be there, and they should simply call your processRequest(...) method, passing the request and response objects. Try the above changes and let me know.
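The delegation pattern the reply describes can be sketched as follows. The class name is illustrative, and the servlet-API types are replaced by empty stand-ins so the fragment stands alone; a real servlet would extend javax.servlet.http.HttpServlet and use the real request/response classes:

```java
// Stand-ins for the servlet-API types (assumed names only, so this compiles alone).
class HttpServletRequest {}
class HttpServletResponse {}

class UploadServlet {
    int handled = 0; // counts requests, just to make the sketch observable

    // Both HTTP entry points simply forward to the shared handler,
    // as the reply above suggests.
    protected void doGet(HttpServletRequest req, HttpServletResponse res) {
        processRequest(req, res);
    }

    protected void doPost(HttpServletRequest req, HttpServletResponse res) {
        processRequest(req, res);
    }

    protected void processRequest(HttpServletRequest req, HttpServletResponse res) {
        // the jspSmartUpload handling from the original post would go here
        handled++;
    }
}
```

This keeps the upload logic in one place whether the form is submitted via GET or POST.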
Joined: Nov 27, 2001
Hi Lakshmeenarayana (I copied and pasted your name). Yes, the doGet and doPost methods exist, and they only call processRequest. I just wanted to show the main portion of the code, which is the one I'm having problems with, especially with SmartUpload. Cheers
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461132.22/warc/CC-MAIN-20151124205421-00015-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 1,051
| 6
|
https://research.facebook.com/people/strazzulla-ortega-daniel/
|
code
|
I’m a UX Researcher on the Messenger Team focused on interoperability between different messaging platforms. As a human-computer interaction specialist I use methods and resources from different disciplines like computer science, sociology, psychology, and design, to understand how people use technology to communicate with each other.
I believe that people appropriate and customize communication applications to better serve their communication needs depending on who they’re interacting with. The challenge then becomes to design and implement communication solutions that are private, usable, extensible, and customizable.
Prior to joining Facebook I received an MSc. in Computer Science from Stanford University, and pursued PhD studies at the University of Paris-Sud in France. I also had the chance to study live streaming content creators and their communities as a researcher at Twitch.
Social networks, social computing, computer-mediated communication, online bonds, ethnography, emerging markets, messaging, privacy, accessibility, design, design research, live streaming, and online communities.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655027.51/warc/CC-MAIN-20230608135911-20230608165911-00021.warc.gz
|
CC-MAIN-2023-23
| 1,112
| 4
|