| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
https://www.geekzone.co.nz/forums.asp?forumid=48&topicid=268346
|
code
|
I'm trying to track down an article from a Reader's Digest edition, fairly sure it was in the late 1970s, that I remember reading when I was younger. From memory, a guy took notes during an aeroplane hijacking (sort of like a diary) and it was published as a story in an edition.
Try as I might, though, I can't seem to find any online, searchable edition of the magazine. Does such a thing exist, or (possibly) does anyone happen to remember the article I'm thinking of and know what year/month it was printed? I was hoping to read it again.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147917.99/warc/CC-MAIN-20200714020904-20200714050904-00432.warc.gz
|
CC-MAIN-2020-29
| 542
| 2
|
https://ledgerleopard.com/blockchain-glossary/block-2/
|
code
|
A block in blockchain is a data structure that holds batches of transactions, securely linked to form an immutable chain.
Understanding the Block in Blockchain
A block in blockchain technology is a digital ledger entry that records and secures transactions. Each block is connected to the one before and after it, creating a chronological and unbreakable chain of data when joined together.
What is a Block in Blockchain?
Imagine a block as a digital container, a box filled with transaction records, much like a page in a ledger. Once a block reaches its capacity, it's encrypted, sealed, and linked to the chain, like adding another car to a train.
What Information Does a Block Contain?
- The header, which includes metadata such as timestamps and a unique identifier.
- The body, housing a list of transactions with varying data types.
Block sizes differ across blockchains. Bitcoin, for example, caps blocks at 4 million weight units (in practice roughly 1-4 MB of data), while other chains allow larger blocks to hold more transactions.
Enhancing Security with Blocks
Blocks bolster blockchain security. Altering a single block would require changing every subsequent block as well, a near-impossible feat due to the cryptographic linking of blocks via hashes.
Adding New Blocks to the Blockchain
New blocks undergo a verification process using a hash function, which creates a unique code linking each block to the previous one. This ensures any data changes are easily detectable. Blocks are added through consensus mechanisms like proof-of-work or proof-of-stake, where network nodes agree on the block's contents.
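The hash-linking and tamper detection described above can be sketched in a few lines of Python. This is a toy model, not a real blockchain implementation; the function names and transaction strings are illustrative:

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Build a simplified block: a header with metadata plus a body of transactions."""
    header = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # the link to the previous block
        "tx_root": hashlib.sha256(json.dumps(transactions).encode()).hexdigest(),
    }
    block = {"header": header, "body": transactions}
    # The block's own identifier is a hash over its header.
    block["hash"] = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Any change to an earlier block breaks every later prev_hash link."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["header"]["prev_hash"] != prev["hash"]:
            return False
    return True

genesis = make_block(["coinbase tx"], prev_hash="0" * 64)
block2 = make_block(["alice->bob 5"], prev_hash=genesis["hash"])
chain = [genesis, block2]
print(chain_is_valid(chain))  # True

# Tampering with the first block's identifier invalidates the link:
genesis["hash"] = "tampered"
print(chain_is_valid(chain))  # False
```

This is the sense in which altering one block requires recomputing every block after it: each later block's `prev_hash` would need to be rewritten too.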
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816942.33/warc/CC-MAIN-20240415045222-20240415075222-00409.warc.gz
|
CC-MAIN-2024-18
| 1,548
| 13
|
https://www.technipages.com/what-is-the-cpu-cache/
|
code
|
Modern CPUs run incredibly fast; they can significantly outperform the system RAM. This speed imbalance between CPU and memory would cause your processor to often sit idle, waiting for data to be sent to it so it can continue running a process. To prevent this from happening, allowing CPUs to continue to run faster and faster, a CPU cache is used.
How does a CPU cache speed up a CPU?
The CPU cache is designed to be as fast as possible and to then cache data that the CPU requests. The CPU cache has its speed optimised in three ways: latency, bandwidth, and proximity. The CPU cache operates at very low latencies, minimising the amount of time it takes for a result to be returned. For example, the Intel i9-9900k has a cache latency of 0.8, 2.4, and 11.1 nanoseconds for the L1, L2, and L3 cache respectively. In comparison, the latency of modern high-speed RAM is on the order of 14 nanoseconds.
Tip: The cache levels will be explained in more detail later, but simply put, the lower-numbered cache levels are faster but more expensive, so they have smaller capacities. A nanosecond is a billionth of a second, so a latency of 0.8 nanoseconds means that it takes less than a billionth of a second to return a result.
In terms of bandwidth, the CPU cache offers significant performance improvements over traditional storage and RAM. Read speeds of the L1 and L3 cache can peak at 2.3 TB/s and 370 GB/s respectively, while the bandwidth of RAM is typically around 40 GB/s. This increased bandwidth means that the CPU cache can transfer data to the CPU a lot faster than RAM can.
To achieve the maximum possible speeds the CPU cache is actually built into the silicon of the CPU die itself. This minimizes the distance that any electrical signals need to travel, therefore keeping the latency as low as possible. For example, when the L3 cache was first moved from the motherboard to the CPU die, the processor of the time (Pentium 4 EE) was able to gain a 10-20% performance improvement.
CPU cache architecture
Modern CPUs generally use three layers of CPU cache labelled L1-3, with lower-numbered caches being closer to the CPU cores, faster, and more expensive. Each individual CPU core in a multi-core CPU has its own L1 cache. It is typically split into two portions, the L1I and L1D. The L1I is used to cache instructions for the CPU while L1D is used to cache the data on which those instructions are to be performed.
Each CPU core typically also has its own L2 cache on a modern CPU. The L2 cache is larger and slower than the L1 cache and is used primarily to store data that wouldn't otherwise fit in the L1 cache. By having a dedicated L2 cache per core, cache contention is avoided. Cache contention is where different cores fight to claim cache space for their own workloads, which can lead to important data being evicted from the cache.
The L3 cache is typically shared between all the CPU cores of the processor. Again, the L3 cache is slower than the L2 cache but is cheaper and larger. By providing a shared cache it’s possible to reduce the amount of data that would be duplicated on lower levels of per-core cache.
Tip: As an example of cache sizes, Intel's i9-9900K has a 64KB L1 and a 256KB L2 cache per core (for a total of 512KB L1 and 2MB L2), plus a 16MB shared L3 cache.
How is the CPU cache used?
All levels of the CPU cache are used to speed up processor performance by caching data from RAM. When a CPU requests data, it searches through its cache layers first in an attempt to get the data as fast as possible. If the data is found there (a cache hit), the CPU can continue its processing. If the data isn't in the cache (a cache miss), the CPU has to check RAM, and then the hard drive if the data isn't there either. The faster layers are always checked first for maximum performance.
To help ensure the CPU has the data it needs in the cache when it needs it, the cache attempts to predict what data the CPU might need next. For example, if the CPU has requested some data for an image it's rendering, the cache may pre-emptively fetch more of the image data so it can be fed to the CPU as fast as possible.
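The value of cache-friendly, sequential access can be hinted at even from Python. This is only a sketch (sizes are arbitrary, and CPython's interpreter overhead mutes the effect compared with a flat array in C), but the two functions below sum the same data with a sequential versus a strided access pattern:

```python
import time

# One flat list stands in for a contiguous N x N array in row-major order.
N = 1000
data = list(range(N * N))

def sum_row_major():
    # Sequential walk: each access is adjacent to the previous one,
    # which is the pattern cache lines and prefetchers are built for.
    total = 0
    for i in range(N):
        for j in range(N):
            total += data[i * N + j]
    return total

def sum_col_major():
    # Strided walk: jumps N elements between accesses, producing far
    # more cache misses in a language with true flat arrays.
    total = 0
    for j in range(N):
        for i in range(N):
            total += data[i * N + j]
    return total

for f in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    f()
    print(f.__name__, f"{time.perf_counter() - start:.3f}s")
```

Both functions compute the same sum; only the order of memory accesses differs.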
Did this help? Let us know!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655143.72/warc/CC-MAIN-20230608204017-20230608234017-00440.warc.gz
|
CC-MAIN-2023-23
| 4,215
| 15
|
https://it.toolbox.com/blogs/madgreek/book-review-soa-governance-by-todd-biske-110108
|
code
|
My friend and colleague Todd Biske has recently published a must-read book about SOA Governance. Most books that discuss process require a ton of caffeine to make it from cover to cover. Not this book! Todd uses a unique style, combining the story of a fictional company with his seasoned advice. He walks us through a multi-year SOA project at a fictional insurance company called Advasco. Advasco, like many of the companies we work for, has years of legacy applications that are the direct result of acquisitions, mergers, and years of developing in silos. The goal of their initiative was to "provide sales agents and marketing staff for the insurance division with a single view of the customer".
Todd takes us through the evolution, which spans five years. Over this time, you can see the company make continual strides towards the overall vision, but not without challenges. With each challenge comes another opportunity to mature their governance model. At the end of the book, Advasco's IT staff have made a major impact on the company's bottom line and its ability to react to market changes.
This is one of the few books that actually shows us what success looks like, and that is why it is so good. Many books talk about processes, structure, and controls but fail to provide a glimpse of what the fruits of that labor look like. Todd takes us from project inception, to successes, to setbacks, to mitigation strategies, and ultimately to enterprise-wide success. By taking us through this journey, we get a better understanding of the impact of his recommendations for SOA governance. I won't go into detail on what he recommends; I will let you read it yourself. But I can guarantee that you will enjoy the book and will be able to relate your real-world experiences to those of the fictional characters who helped move Advasco forward by leveraging SOA. What made Advasco successful? Even if you follow everything that Todd recommends in this book, you still are not guaranteed success.
There were some specific events and characteristics that made this journey a success. Here is a short list:
- Initiative was driven top-down
- Very little resistance to change
- Great working relationship between business and IT
- Very talented staff that was willing to learn
- Effective EA team
- Project was business-focused
Final Thoughts
SOA Governance is a must-read for any company preparing for SOA, or for companies struggling to make their SOA initiative successful. Follow the advice and examples that Todd provides, but also address the bullets listed above. If you have the great staff that Advasco has, you will find success. But in the real world, the effort to promote this type of change in the enterprise will usually require much more focus on change itself. At the SOA Consortium meeting last month in Orlando, Todd gave us a great presentation about SOA governance. I followed that up with this presentation on change.
SOA & Change
If you focus on Organizational Change Management and heed Todd's advice on SOA governance, your odds of succeeding with SOA will be greatly enhanced. Buy the book, it is the best book on governance that I have read so far.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00761.warc.gz
|
CC-MAIN-2018-09
| 3,288
| 15
|
https://connect.mozilla.org/t5/discussions/shortcuts-in-its-current-state-on-firefox-mobile-is-just-wrong/td-p/42189
|
code
|
I changed my phone yesterday and realized how resigned I have been for the last two years to how broken this function is. I don't even know why I waste my time writing this; there has been plenty of good and constructive feedback here and on Bugzilla, but nothing happens because "that's intended behavior".
Firefox wants to stand for privacy and customization, but Shortcuts doesn't deliver on either.
Mixing shortcuts with top sites: all of a sudden some random stuff from my history is displayed on the front page of my browser? WHY? When I want to show someone something on my phone, I don't want them to see what I did last night.
I have to add unwanted shortcuts just to keep top sites off the first page.
Why can't I long-press and rearrange them? This breaks every basic touch UX convention.
Shortcuts don't sync like everything else.
In some bug ticket a dev said this is supposed to reflect desktop behavior; then why are Shortcuts not a special bookmark folder that I can sync and edit on desktop, like the Bookmarks toolbar folder?
Not being able to edit the URL of a given shortcut is also a basic missing feature.
A fixed 16 slots is not customizable at all.
I'm sorry to rant like this but I just can't understand it. How does a feature get rolled out missing basics that then go unfixed for over a year, while many man-hours get assigned to stuff like new background colours? 🤦♂️
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475311.93/warc/CC-MAIN-20240301125520-20240301155520-00141.warc.gz
|
CC-MAIN-2024-10
| 1,422
| 10
|
http://twinery.org/forum/discussion/2357
|
code
|
I've been making a JRPG-like combat system. I previously tried doing it in Twine 1.4, but I had trouble figuring out macros. I strung together some of the currently implemented features into a narrative: https://grippli.neocities.org/rpg combat.html
I'm at a point where in order to add new features, I need to decide what stats combatants will have and what those stats do. Currently they just have Attack Power and Hit Points. One monster also has a non-zero Magic stat for testing purposes.
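Not the poster's Twine code, but as a rough Python sketch of how stats like these might interact in a combat turn (names and numbers are hypothetical):

```python
# A minimal stat block: Attack Power deals damage to Hit Points,
# and a non-zero Magic stat adds a bonus on top.
def attack(attacker, defender):
    damage = attacker["attack_power"] + attacker.get("magic", 0)
    defender["hit_points"] = max(0, defender["hit_points"] - damage)
    return defender["hit_points"]

hero = {"attack_power": 5, "hit_points": 30}
slime = {"attack_power": 2, "hit_points": 12, "magic": 3}

print(attack(hero, slime))  # 7: slime takes 5 physical damage
print(attack(slime, hero))  # 25: slime hits back for 2 + 3 magic
```

Adding a new stat (defense, speed, etc.) then only means extending the damage formula in one place.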
All in all an interesting learning exercise.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534693.28/warc/CC-MAIN-20220520223029-20220521013029-00374.warc.gz
|
CC-MAIN-2022-21
| 537
| 3
|
https://stackoverflow.com/questions/25417627/pausing-a-sprite-kit-game-and-transitioning-to-home-menu
|
code
|
I am developing a game in Xcode's SpriteKit that I intend to ship to the App Store. I have a gameplay scene where all of the objects are SKSpriteNodes and are children of a single SKNode *nodePlay. One of these is a button that pauses the game and brings up a custom pause menu. In the pause menu, the user can resume, restart, go to settings, go to the main menu, et cetera. An important feature I want to retain is the state of the game when we pause it, such as the level/timers/position of objects executing SKActions.
I have found several ways to do this, but I wanted to ask frankly if there is a 'best'/preferred method and why. Here are just a few that I came up with myself or came across in research:
1) Use two view controllers. The first runs only the gameplay scene, and the pause button transitions the user to the second view controller, where the other SKScenes will appear (settings scene, home scene, ...). This way, when I set nodePlay.paused=true on the first view controller and transition to the second view controller by popping, the state of my first view controller will be kept (including suspension of any SKActions) when I pop back.
2) Combine the play SKScene and the pause SKScene as a single SKScene, so when we pause the game, the current view will just bring up an SKNode *nodePause with several SKSpriteNode/SKLabelNode children that act as the buttons (I'll set nodePlay.paused=true, set its alpha to 0.5, and zPosition=-10 so it is clearly in the background).
3) Archive the gameplay SKScene and transition between scenes with something like:
SKScene *pauseScene = [[MyPauseScene alloc] initWithSize:self.size]; [self.view presentScene:pauseScene];
and if we resume, we will restore the archived scene. A major issue I have read about with this approach is that it does not save the state of SKActions (if I have a moveBy action, objects will not move as desired; if I have a moveTo over a duration, the object will move very slowly...)
If there is a better way that I haven't listed I'd really appreciate the advice - thanks!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00506.warc.gz
|
CC-MAIN-2019-26
| 2,027
| 8
|
https://johnarthur.wordpress.com/2008/03/25/a-high-resolution-ip-webcam/
|
code
|
Webcams are fairly ubiquitous things these days and by no means expensive. They can be good or bad depending on how much money you want to spend, but there is one almost-universal rule, which is that they connect to a host PC over USB. IP-based cameras that connect to a LAN via an RJ45 connector or wirelessly over 802.11 are quite a bit more useful, because they can be put almost anywhere, but they tend to cost a surprising amount and not provide much resolution. A low-end one like the LevelOne FCS1030 is NZ$260, and they get a lot more expensive than that. For example, the wireless D-Link DCS5300 sells for NZ$930 and it only does 320×240 pixels.
After all, IP cameras have a processor in them, and do web serving, and that… well, that really isn’t a reason for them to be expensive. Some sort of premium seems to go on anything connected with the security industry.
As an illustration of this, the Linksys NSLU2 is a little network storage controller which does web and file serving. It sells here for about NZ$145. It's a complete computer with a 266MHz RISC CPU, 8M of flash and 32M of RAM, two USB ports, and an RJ-45 connector. Not much by today's standards, but not very many years ago those would have been respectable specs for a desktop. It runs off a 5V plugpack. The really interesting thing about it is that the open-source community has developed something like five alternative firmware distributions for it which are very capable and can do almost anything the hardware will allow, including acting as a camera server for an attached webcam.
Webcams have also made some interesting strides forward recently. A number of new webcams with true resolutions greater than 1 megapixel have appeared, and a standard for USB web cameras has finally been agreed on. This standard, known as the USB Video Class or UVC, means that newer webcams no longer require proprietary drivers but will work with a generic UVC driver. The Logitech Quickcam Pro 9000 is a good example of a UVC camera, capable of up to 1600×1200 pixels complete with optics to match. It sells here for about NZ$128.
I have been working on combining these two devices with one of the open-source firmwares that has the rather disconcerting name of OpenWRT Kamikaze. Despite the name it has proved to be quite reliable and full-featured, although not terribly well documented, as it is constantly being worked on and improved. Recently it has got to the point where it can easily make the NSLU2 and the QuickCam 9000 work together to make a high-resolution IP camera server, for a total of NZ$275. As it stands the combination is only suitable for indoor use, and it has some limitations, but it still produces a very good image for the money and it works with any web browser — or with my Linux-based ZoneMinder security camera software. If you want indoor high-resolution imaging over a LAN, this could be quite useful. Compiling OpenWRT and selecting the necessary bits is a bit involved and can take a few hours, so I am making the end product, a firmware build called openwrt-nslu2-uvc-webcam1.bin, available here.
To use it yourself you will need:
- An NSLU2
- A UVC-compatible webcam such as the QuickCam 9000
- A host computer running Linux
- The firmware image
- Access to a DHCP server on your LAN
Installing the replacement firmware on the NSLU2 is a bit technical, but not difficult. You will need to run a tool called UpSlug2 to install the firmware; UpSlug2 is described here. I would recommend running it from Linux. I used my Ubuntu machine, but booting a Knoppix Live CD on a Windows machine should work fine. UpSlug2 can also be used directly under Mac OS X: see here. Under Ubuntu, getting UpSlug2 is as simple as
apt-get install upslug2
The next step is to power-up the NSLU2 in “upgrade mode”. It should be connected to the same LAN as your Linux machine, but do not plug the camera in at this stage. Use a straightened paper clip or other thin probe inserted in the Reset hole at the back of the NSLU2. Apply gentle pressure and the reset switch will click in. Hold it down and press the power button at the front. The Ready/Status LED will come on orange, but after nine or ten seconds will turn red. Release the reset switch as soon as this happens. The Ready/Status LED should start alternating red and green, indicating that the device is ready to upgrade. Now issue the command:
sudo upslug2 --image="openwrt-nslu2-uvc-webcam1.bin"
If all is well and UpSlug2 can find the NSLU2 you should see a nice animated summary of the upgrade process, which will take about one minute twenty seconds. When it finishes you should wait for three more minutes for it to perform initial setup. You can then disconnect the power, plug in the USB webcam (to the USB port nearest the LAN socket – the other one isn’t powered), and power it up again. After about 55 seconds the red ring light on the camera indicating that it is recording should come on.
The firmware has been set to use DHCP to dynamically determine its IP address. This has the advantage of working in almost all network setups, but it does mean that you will need access to the DHCP server log on your local LAN to find out what IP address it has been given. The logs should show a DHCP discover/offer/request/ack sequence at the time the NSLU2 powers up, which will tell you what IP address it was given.
The log will also give its Ethernet or MAC address (which is on a label on the outside of the box as well, if you still have it). If you have access to the DHCP server's configuration files, you can tell the DHCP server to give the NSLU2 with that Ethernet address a consistent IP address at which it can always be found.
Once you have the IP address, use any web browser to go to http://<IP address> (e.g. http://192.168.10.154) and you will find an About page for the M-JPEG streamer software, offering various forms of static or moving images from the webcam, which is running at a resolution of 960×720. For an unadorned single frame, try http://<IP address>/?action=snapshot.
A sample image (not reproduced here) is not bad for a webcam at all. A bit fuzzy around the edges, but resolution in the middle is fine. Your cat may look different.
This firmware should work with any UVC-compatible webcam, which includes a number of models from Logitech, Creative, Philips, and other manufacturers. A list of compatible ones can be found here [Warning: the Microsoft Lifecam NX-6000 is listed, but I am told it doesn't work]. The firmware will not work on anything other than an NSLU2. I intend in due course to describe how to build the firmware in another post, but I had to build the "trunk" version to do this, and I'm hoping a formal OpenWRT release that supports the mjpg-streamer package will come out and make the whole process much easier.
The NSLU2/QuickCam combination is a powerful one that could obviously be used for a good deal more than a simple image server. The NSLU2 isn’t powerful enough to run anything like the ZoneMinder software except at low resolutions, but it could still be used as a stand-alone remote survey camera with a few additions. I also intend to post about this in the future.
I won’t actually be using this camera with my ZoneMinder setup because it is a bit too demanding. IP cameras require more CPU cycles from ZoneMinder than directly-connected cameras, because ZoneMinder has to decode the JPEG format that the images are sent over the network in. My 1.2GHz Pentium III can only just manage 4 frames a second from this camera at 640×480 with not enough margin to handle another camera. I would guess that if you wanted to use several of these with a ZoneMinder setup nothing short of a 2.5GHz dual-core CPU would be enough.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00485.warc.gz
|
CC-MAIN-2022-33
| 7,795
| 24
|
https://supportforums.cisco.com/t5/security-and-network-management/security-for-bridge-connectivity/td-p/426344
|
code
|
Yes, you can. Under the SSID configuration you can specify an authentication username and password. This can be an added level of security on top of MAC authentication. You can also restrict those user/MAC accounts to be the only ones that can authenticate to your infrastructure SSID. A lot of layers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687642.30/warc/CC-MAIN-20170921044627-20170921064627-00106.warc.gz
|
CC-MAIN-2017-39
| 301
| 1
|
http://www.avsforum.com/t/1474305/samsung-pnxxf4500-owners-thread/120
|
code
|
I just picked up one of these sets from Walmart (PN51F4500) yesterday, and like what I see so far. I have it hooked up to my HTPC (Intel Ivy Bridge graphics, HDMI, 1024x768 @ 60 Hz). One weird thing I noticed was that in certain colors, not all, but darker (not black) colors, the pixels... dance? squiggle? wiggle? sparkle? I don't know. Is this a defect? (It doesn't look like one.) The pixels actually MOVE, so it's not purely a dithering effect. It's almost like gnats buzzing around your head. Pixel noise. Is it part of the panel design? (My old Dell LCD would do this because it was a 6-bit panel trying to display 8-bit-per-pixel colors.) Would the 1080p panel have these problems? (PN51F5300, specifically)
Here's an example on text, and another on a dark background color (images not reproduced here).
Other than that, it's a nice panel. A bit close to the SDE threshold, but usable. Heck, for the $ it's a great deal. I'm replacing my old Vizio VP322 (32" plasma), which started to get IR after just a few seconds of static display.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011103144/warc/CC-MAIN-20140305091823-00009-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 995
| 4
|
https://community.microfocus.com/it_ops_mgt/ucmdb/f/sws-cms_sup/174974/reconciliation-priority---attribute-override-for-a-discovery-job---not-working
|
code
|
Issue seen on both UCMDB 2019.02 and 2019.11
Looking at the latest UCMDB Best Practices Maintenance document, there is a section about setting a job priority on an attribute ("Check for "flipping" data" chapter).
"You can use the Reconciliation Priority function of UCMDB to prioritize which job has higher
priority over others.....Starting with CMS 2018.11, you can now set reconciliation priorities by individual discovery job, allowing you to prioritize one discovery job as having priority over others. A common use
case would be the Host Connections by Shell versus Host Connections by SNMP...."
Then the doc actually shows some screenshots where it is evident they were able to set the priority of some jobs for a Node.MemorySize attribute.
This feature is also mentioned in the official UCMDB 2019.11 docs:
The instructions are not good: they don't say that you have to switch to "Discovery" in the pull-down menu at the top of the screen, above the CI Types pane. Nevertheless, when I switch to "Discovery" and select, e.g., Node from the CI Types, I do not see any discovery jobs, and when I click the plus sign in the Attribute Overrides and select an attribute, e.g. Name, the job pull-down list is empty.
So, obviously, this feature does not work as described, and it is also not well described in the docs.
I did find a workaround: you have to go through the job itself. Go to Data Flow Management -> Universal Discovery, find the particular job (the one whose priority you want changed for a certain attribute), and right-click it. From the menu, select "Set reconciliation priority"; that takes you to the Reconciliation Priority section, but this time the job is selected and prepopulated, and you can then set the priority for this job only in the Attribute Overrides. You could then repeat the process for the other job(s).
Even though this works, it is confusing and not well documented, or it is not working as documented.
Am I missing something here? Did I make a mistake somewhere? Are you guys seeing the same issues?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487637721.34/warc/CC-MAIN-20210618134943-20210618164943-00541.warc.gz
|
CC-MAIN-2021-25
| 2,045
| 12
|
https://talentor.com/job/INSA18705-engineer-autonomous-driving
|
code
|
Engineer Autonomous Driving
You will be working in the Autonomous Driving Team of a well-known technology company. You will have an impact on how people will travel in the future: saving time, creating safety, and delivering comfort. While the goal is automated driving, the road your career will take is far from predefined. You will overcome issues that are new for everyone. Learning on the job was never more essential.
Work in an agile development team on state-of-the-art new technologies in the maps space targeting autonomous driving vehicles.
- Design and implement test automation frameworks and suites for all layers (incl. backend server application/map compiler and frontend web application) in the product, based on analysis of the system architecture and interfaces
- Analyze the requirements regarding test automation and select the best tools and configuration suiting those
- Support the integration and execution of automated tests in the CI environment of the project
- Analyze results and report on issues found. You are expected to identify trends in KPIs and quality as well
- Maintain test requirements traceability
- Min. 3 years of experience in software development testing
- Experience with testing of Java-based server application running on public cloud platform and frontend web applications
- Experience with unit test frameworks like Junit or Gtest/Gmock
- Experience with BDD Frameworks such as PyTest-BDD or Cucumber
- Ability to select and/or build automated test frameworks and help with integrating these in the CI environment
- You’ll love to work in a team
- Understanding that you are working in uncharted territory in some parts, and that you need to apply forward thinking
As a plus
- Hands on knowledge of Java, TypeScript, Python and Selenium WebDriver
- Experience with product development
- Understanding of database technologies
26 vacation days;
A lease car or transport budget;
Home office setup (laptop, monitor, desk if needed);
Apply for this job
Does this job fit your talents and seem right for you? Don't hesitate to apply online now.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00494.warc.gz
|
CC-MAIN-2021-49
| 2,099
| 24
|
https://serverlogic3.com/what-is-dense-matrix-in-data-structure/
|
code
|
What Is Dense Matrix in Data Structure?
A dense matrix is a data structure used to represent a collection of values organized in a two-dimensional grid or array. It is commonly used in various computational tasks, such as linear algebra operations, graph algorithms, and scientific computations.
In a dense matrix, the elements are stored in a single contiguous block of memory. This means that every cell of the matrix corresponds to an entry in that block. The elements are typically stored row by row or column by column.
Dense matrices have certain properties that make them suitable for specific applications:
- Efficient random access: Due to their sequential memory layout, accessing any element in the matrix has constant time complexity.
- Easy arithmetic operations: Dense matrices allow efficient arithmetic operations like addition, subtraction, and multiplication due to their contiguous storage.
- Inefficient for sparse data: Dense matrices are not well-suited for representing sparse data since they waste memory by allocating space for zero-valued elements.
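A minimal sketch of these ideas in Python (class and method names are illustrative, not from any particular library): one flat list serves as the contiguous block, and index arithmetic gives constant-time access.

```python
# A dense matrix over one contiguous (flat) list, stored row-major.
class DenseMatrix:
    def __init__(self, rows, cols, fill=0):
        self.rows, self.cols = rows, cols
        self.data = [fill] * (rows * cols)  # one contiguous block of storage

    def get(self, i, j):
        # Constant-time random access: a single index computation.
        return self.data[i * self.cols + j]

    def set(self, i, j, value):
        self.data[i * self.cols + j] = value

    def add(self, other):
        # Element-wise addition walks both blocks sequentially.
        out = DenseMatrix(self.rows, self.cols)
        out.data = [a + b for a, b in zip(self.data, other.data)]
        return out

m = DenseMatrix(2, 3)
m.set(0, 1, 5)
m.set(1, 2, 7)
print(m.get(0, 1))         # 5
print(m.add(m).get(1, 2))  # 14
```

Note how a 2x3 matrix of which only two cells are non-zero still allocates all six slots; that is the sparse-data inefficiency mentioned above.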
Dense matrices find extensive use in various fields:
In scientific computing and numerical analysis, dense matrices are widely used for solving systems of linear equations, eigenvalue problems, and other mathematical computations. Their efficient access patterns make them ideal for performing complex calculations quickly.
Dense matrices are employed in graph algorithms like breadth-first search (BFS) and depth-first search (DFS). They enable efficient representation of graphs and facilitate quick traversal through adjacent vertices.
In image processing tasks, dense matrices are used to represent images as pixel values. Operations like filtering, transformation, and compression can be performed efficiently using dense matrix representations.
Dense matrices provide an efficient way to store and manipulate two-dimensional data. Their contiguous memory layout enables fast random access and supports various computational tasks. Understanding the properties and applications of dense matrices is crucial for using them effectively in data-intensive operations.
There are many errors that can appear when Plex Media Server attempts to transcode a media file. Most transcoding errors that are displayed don't provide much information as to the cause of the error, so it is up to the server owner to try and resolve the issue.
An example of one such Plex playback error message is:
Conversion failed. The transcoder crashed or failed to start up.
If you get this message when streaming a movie that is being transcoded, then there are a few things that you can try to resolve the issue.
Check the location and permissions of the transcoder temp directory
Verify that the temporary transcoder directory is in a location that the user running the Plex server has write access to. As Plex transcodes, it writes blocks of the media file to files in this temporary directory so it can stream the file to the client machine.
If Plex can't write to the directory, this error can occur. Ensure the directory exists and is writable by the user running Plex.
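A quick way to verify this is to test the directory for write access. The sketch below uses a hypothetical path; substitute the transcoder temp directory configured in your Plex settings, and run it as the same user account that Plex runs under:

```python
import os

# Hypothetical path -- replace with your configured transcoder temp directory.
TEMP_DIR = "/tmp/plex-transcode"

os.makedirs(TEMP_DIR, exist_ok=True)   # create it if missing
writable = os.path.isdir(TEMP_DIR) and os.access(TEMP_DIR, os.W_OK)
```

`os.access` checks permissions for the current process's user, which is why the check must run under the Plex account to be meaningful.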
Delete the transcoder directory
If you have verified that the permissions to the temp directory are fine, but you still get the error, try deleting the transcoder temp directory. The directory will be re-created the next time Plex transcodes a media file.
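Deleting the directory can be scripted as below (again a hypothetical path; stop the Plex server first so no transcode is in flight):

```python
import os
import shutil

# Hypothetical path -- replace with your configured transcoder temp directory.
TEMP_DIR = "/tmp/plex-transcode-demo"

os.makedirs(TEMP_DIR, exist_ok=True)         # stand-in for the existing directory
shutil.rmtree(TEMP_DIR, ignore_errors=True)  # remove the directory and contents
removed = not os.path.exists(TEMP_DIR)
```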
Disable hardware acceleration
Sometimes a bad driver can cause the transcoder to crash and produce a transcoder error. Try disabling hardware acceleration to see whether the error goes away. If it does, try updating or reinstalling the graphics card driver, and then re-enable hardware acceleration.
Change audio setting
The audio selection for the media file could cause the transcoder issue. Try setting the audio to stereo to see if that resolves the problem.
Windows users: check key or keyset
For Windows users, if you check the Plex logs on the server you may see messages that say the following:
Key not valid for use in specified state
Keyset as registered is invalid
The above messages indicate an invalid key or keyset, and Plex recommends trying the following:
- Stop and exit Plex Media Server so it isn't running.
- From Windows Explorer, delete the following folder:
- Restart the machine.
- Start the Plex Media Server.
Do the Plex dance
The Plex dance is known to resolve several issues with Plex, and can be tried if the other options fail. It is a last-resort option, as it involves several steps that can take a while if you have a large library.
The following steps outline how to perform the Plex dance.
- Move all the files for the media causing the issue out of the media folder being used for the Plex library.
- Scan the library to detect the changes.
- Empty the trash and clean the bundles to ensure the media item has been cleaned from Plex.
- Verify the naming convention (TV show naming/movie naming) for the media files, and then move them back into the folder.
- Rescan the library a second time, and ensure the media item has returned to the library.
If the above solutions didn't fix the transcoding issue, then the next step would be to ask for help on the Plex forums.
A box set of 4 Monty Python classics. The first Monty Python film 'And Now For Something Completely Different' (1971) is a collection of some of their better-known television sketches, including the legendary 'Dead Parrot' sketch and the 'Lumberjack Song'. John Cleese, Eric Idle, Graham Chapman, Terry Gilliam, Terry Jones and Michael Palin make up the troupe. 'The Holy Grail' (1975) was their second feature. King Arthur and his trusty knights fearlessly (for the most part) travel the length and breadth of the country in search of the mythical Holy Grail. On their way they have to deal with the sarcastic taunts of the French Knight, the Knights who say 'Ni', Tim the Enchanter and the Terror of the Cave of Caerbannog, amongst other things. 'Life of Brian' (1979) was banned in 17 countries on release. Set during Biblical times, the film tells the story of Brian (Graham Chapman), an accidental messiah whose life runs in eerie parallel to that of Jesus Christ. His misadventures come to the attention of Pilate, crucifixion inevitably follows, and the film ends with the infamous group rendition of the song 'Always Look on the Bright Side of Life'. Finally, in 'The Meaning of Life' (1983) the Monty Python team set off in search of the holy grail of human understanding with a series of sketches aimed at discovering the meaning of life itself. Includes the show-stopping anti-contraception musical number 'Every Sperm is Sacred', a schoolmaster's overly vivid demonstration of the facts of life for his students, and the infamous sequence in which a restaurant diner goes one mint too far...
Please refer to information provided on individual product pages for delivery and shipping time frames for your region.
When your order has been dispatched from ebuzz.ie, you will receive a dispatch notification email. Please refer to transit times below for expected delivery.
Orders are dispatched via An Post (Ireland's Postal Service) or Courier. If applicable, tracking details will be forwarded to you in your order dispatch notification email.
At seasonal trading times, i.e. Black Friday and Christmas, some delays may occur; please allow an extra few days for delivery.
Rep of. Ireland - Free Delivery on all orders over €15. For orders under €15 standard delivery charges are from €2.50 - €4.50 depending on size and / or quantity of items ordered.
International Customers - Please refer to the shipping calculator when item(s) are added to cart for shipping rates to your region.
When an item is pre-ordered, it is held until the official release date (release date details can be found on individual product pages). If other items which are not pre-ordered are on the same order, the entire order will be held and dispatched upon release of the pre-ordered item(s).
Refunds & Returns:
ebuzz.ie complies with EU distance selling rules.
If you have changed your mind with your purchased product, you may return the goods to ebuzz.ie within 14 days of receipt.
Return to Address:
ebuzz.ie returns dept.
Unit B7 - B9 Calmount Business Park,
Rep. of Ireland
Faulty / Defective Goods:
For CD, DVD, 4K, Blu-Ray & Vinyl
For Electrical items (Consoles, Laptops, Tablets, Phones and other high value items)
Contact us online - Contact Us
or email us at Customer Care
Firefox/Projects/Tabcandy/Panorama 1 Debrief Chat
After Firefox 4 shipped, the Panorama team and other interested parties got together and discussed how things had gone. This is a transcript of that chat. You can also see the summary.
iangilman: So, welcome all to the official Panorama debrief!
iangilman: I figure this can be a free flowing discussion
iangilman: howdy zpao
mitcho looks to iangilman to get us started with some structure, a guiding question, etc.
iangilman: anant, beltzner, dietrich, ehsan, juanb, msucan, not_gavin, shorlander, please join us if you have something to add!
iangilman: yup... I think I'll just dive in
msucan: hello everyone
iangilman: So, one of the obvious things about Panorama was that it was largely an outside team, not part of the mozilla establishment
iangilman: I think this was good and bad
iangilman: it was good because we had fresh eyes, and we were focused on just this one feature
iangilman: in fact, I think without a dedicated strike team, a large feature like this would never get done
iangilman: It was bad because we had a steep learning curve, and we made a number of naive decisions that had to be reversed later
mitcho: more institutional support would have been beneficial, though
iangilman: right, and maybe a single insider to help us navigate
mitcho: there were disconnects both in terms of code (a lot of us were new to mozilla firefox dev, so there was a lot we had to dig around to find ourselves) and also in project management/support, e.g. a main reviewer, platform contact, etc.
mitcho: at least
pcwalton: my biggest issue with the front end in general is review bandwidth
iangilman: on the other hand, we did have a number of good "friends of Panorama", like ehsan, mardak, and zpao, who really made it possible
mitcho: pcwalton: +1
mitcho: iangilman: true
iangilman: .... but we had to drag them away from their real jobs
zpao: i think part of the issue with the disjoint team was the lack of involvement at meetings (firefox team, platform). i think it's something that if it were in front of people more often, it would have gotten better attention
kevinh: interfacing with the Moz hive mind should have been done more frequently, and earlier
kevinh: not just for engineering, but across the board. In some sense, we played catch-up a lot.
pcwalton: I think all sub-projects should have a reviewer who can review things before they make it into at least a branch
pcwalton: patches sitting in bugzilla for months is the worst of all possible worlds IMHO
iangilman: zpao: good point! I think part of our problem is we didn't even know about those things
mitcho: we also weren't active in dev community involvement, like on planet.
pcwalton: get 'em checked in somewhere, even in a project branch
iangilman: it might be interesting to have a "so you want to start a feature strike team?" FAQ
mitcho: iangilman: good point
raymondlee: iangilman: yes, agree.
shorlander: I agree this is valuable insight for any future projects like this
zpao: definitely (for the team doing it and also for the people who should pay attention to the strike team)
mitcho: also, for me at least, it was unclear early on what exactly the hurdles would be in getting this in the product. whether this was really in-scope, whether we would have the support from the main product team, what the review and testing hurdles were, etc.
iangilman: pcwalton: getting me set up as module owner, so I could review patches, was a huge turning point
mitcho: early on, it was unclear how likely or unlikely it was that (a) we would ever land and (b) it wouldn't be pulled out
pcwalton: yes, that was really good
pcwalton: I wish the Web Console had had a module
iangilman: speaking of FAQ, I think our team wiki page worked out well: https://wiki.mozilla.org/Firefox/Projects/TabCandy/Work
kevinh: I see this conversation being echoed on the moz-dev planning group, but contributions from the community were really crucial for us
zpao: mitcho: that was an issue. partly because of lack of communication, but also because the schedule as a whole was in flux
kevinh: Making that process easier and more encouraged seems like both a good idea for strike teams and moz in general
mitcho: I feel that, if it was clear that this was a stated goal of fx4, a little earlier, within the firefox team, ux, and to dev/community communications, it would have been easier to get people on board and help
juanb: I don't know if there was an assessment of the scope of the project in the beginning, which would have included people in the platform and front end team to sort out the doable vs future work within the timeframe of fx4.
iangilman: mitcho: indeed... it seemed like a couple of months of trying to land and not really knowing how far along the process we were... it was a spinning cursor, not a progress bar
mitcho: even having particular milestones which would be required for inclusion would have been helpful
juanb: I don't know that there was a set criteria of memory and performance metrics we could have used to check that we were doing ok, but that is also true of other projects.
mitcho: well, it seems that there are some criteria like that... don't hurt talos, don't explode memory, don't break things, etc... but that wasn't immediately clear at the beginning.
kevinh: seems like what I'm seeing is 1. we need better communication, preferably with a team member in MozHQ, 2. We need better road-mapping and planning, 3. We need better information on how to onboard people to the process. Most of that says PM to me, though I could be biased
ttaubert: hey everybody!, sorry I got stuck in traffic
iangilman: of course, if there had been too much thought put into whether Panorama should be part of Fx4, it never would have happened
mitcho: ttaubert: welcome!
kevinh: ttaubert: hey
raymondlee: ttaubert: hi
mitcho: iangilman: true.
iangilman: ttaubert: glad to have you here! I'm particularly interested in your perspective on how you found the project and getting up to speed this late in the game
mitcho: iangilman: and to a certain extent it did take actually landing and then iterating to really convince more people that this was worthwhile, or so it felt
iangilman: Right... Panorama really only happened because we were so passionate we blew past all of the road blocks
msucan: i am working on some patches for devtools, so not very active in here, but my comments would be:
kevinh: and had a decent proof-of-concept add-on
raymondlee: Yeah, we didn't really know whether it would be in Fx 4 or not right at the beginning.
iangilman: right! Having a prototype people could see in action was huge for convincing people
raymondlee: the proof-of-concept add-on really helped.
iangilman: (perhaps aza will have more to say about that when he's back from his other meeting)
msucan: i think panorama's code is really nice. i dived into the code really late in the process, to help with a blocker, still it was easy to understand and dive into. i like the code, the structure, but it felt somehow too different from the code style of other mozilla js code
iangilman: msucan: excellent point... I actually find the mainstream mozilla code base really difficult to wrap my head around...
msucan: panorama's code felt more like a typical web page
iangilman: ... so what's better? Having good clean code, or matching the rest of Mozilla?
msucan: with lots of js
iangilman: right... no xul
mitcho: on the other hand, though, I think there are issues with the way panorama is integrated with tabbrowser which happened *because* we built it first as an add-on and then a possibly-removeable addition to the browser
msucan: iangilman: i think clean code does not exclude matching the rest of Mozilla
iangilman: mitcho: good point... like what?
mitcho: i.e. the notion of "tab groups" should really have been a real xul/js construct in the tabbrowser level, ideally
mitcho: we still don't have that... only "hidden"
msucan: just follow code style guidelines. same structure, everything. just indent the same, use arguments with "a" prefix, and so on
mitcho: and that seems like a real pain point down the line, and is a clear reflection of this being developed as an "addition"
msucan: break lines like the rest of the code does
fryn: mitcho: as someone who works on tabbrowser code, i concur with you.
msucan: wrt. tab browser. it feels like panorama duplicates some stuff from tabbrowser
iangilman: msucan: I guess I don't really know the details on how our code is different. We did look at the coding guidelines and tried to follow them. This was, however, a while after we got started
mitcho: if, instead, the whole firefox team made it clear that panorama (or some kind of group functionality) was going to be a first class citizen, we would have built in groups in tabbrowser + sessionstore + sync early on, and then built the interface on top of that, like it should have been, imho
msucan: (but i didn't dive into "what could improve")
zpao: mitcho: i still feel the same way about sessionrestore. many issues came up because it wasn't tightly integrated
fryn: we may want to rewrite tabbrowser to be better modularized and be designed from the ground up to support the integration of features like panorama.
msucan: iangilman: for me it was confusing. you don't use lets, and so on. i had to change "my mode" for panorama
ttaubert: mitcho, right, it's a too defensive integration at the moment
kevinh: mitcho: I agree it wasn't ideal, but there IS a bootstrapping problem. I am not sure the Fx team would let in a project of such magnitude without first being proofed out
msucan: fryn: exactly
pcwalton: all I can say is that I'm glad that Panorama is a series of .js files and not a .xml like tabbrowser.xml
msucan: haha, me too
pcwalton: panorama code feels more like "modern" JS code
pcwalton: is unsurprising, given that it is... well, much newer
pcwalton: I like iQ. mad props to iQ.
kevinh: iQ is funny, some people really didn't want us to do it
mitcho: jeresig thought I was a little crazy
mitcho: when I told him about it
iangilman: So obviously there are things Panorama still needs to take from mainstream mozilla
iangilman: ... and sounds like there may be some things Panorama can contribute
fryn: because of panorama's uintentionally obfuscated hooks into tabbrowser code, i find that i often run into bugs due to how panorama overrides and changes the behavior of some getters in tabbrowser. i'll likely arrange a meeting or two to discuss that once i have more concrete ideas on how to address this problem.
iangilman: like maybe there will be more html/js modules rather than xul? Maybe iQ could be used elsewhere?
mitcho: can we talk a little more about the integration of panorama before we get to good things about the process?
mitcho: (iangilman: though I agree with what you're starting to say...)
iangilman: I really like the idea of the core tech that makes Panorama possible being built into the guts of the browser (unlike how it is now) and the presentation of Panorama being just a thin layer of html/js that only reacts to and guides that core tech
iangilman: mitcho: sure... lead us
mitcho: so... (this sort of is counter to what I was just saying about better tabbrowser integration, but...) I think it was a mistake to not land it with a very easy code kill switch.
mitcho: especially as we didn't know whether it would really ship, even after landing
mitcho: there was a lot of community backlash at points, which may have been more easily alleviated if we had an about:config setting, at least.
kevinh: I actually think not caving to the killswitch was pretty key for us, as the reviews for us (on twitter, since launch) have been overwhelmingly positive
pcwalton: I think some of that was also due to a slightly overzealous land grab of keyboard/mouse shortcuts
pcwalton: most notably three-finger swipe
kevinh: the problem wasn't that Panorama was IN Fx, it was that it was broken, and too easy to trip on. I think we eventually struck a close-to-right, conservative balance on that
pcwalton: perhaps keyboard shortcuts should be phased in gradually as the product matures
mitcho: but for future projects, say
mitcho: what if we never improved the performance? for example
pcwalton: to avoid a backlash resulting from people accidentally triggering a feature that isn't ready yet
mitcho: it would have been a pain if months later we had to back the whole thing out
iangilman: looks like kill switches are going to be required for future features, for what it's worth (if you've been following moz-dev-planning)
ttaubert: sounds reasonable
raymondlee: should be easy to implement.
kevinh: The new structure might be better for new features. Iterate and polish on a separate branch until it is ready to hit m-c. Hopefully some perf work is part of that
mitcho: (not that we necessarily want to keep code "latent"... lord knows there's also other code which is in trunk but disabled... *cough*tabpreviews*cough*)
mitcho: iangilman: yeah
kevinh: pcwalton: very much so
zpao: (yea i think there's been talk of taking that out)
zpao: (though people use it so...)
mitcho: another thing on process:
iangilman: ttaubert: you're quiet today... what are your thoughts?
mitcho: starting with an add-on was very good for prototyping it for starters, but moving it from an add-on's "shell" into trunk, I think, also is a reason for why the final product is so "compartmentalized" in a sense
ttaubert: I'm reading and agreeing so far I slipped later into the whole process so that's not all new to me but some things are
mitcho: perhaps starting from a branch rather than from an add-on would have been better? though harder to get people to try it and get excited.
kevinh: maybe a tiered process, addon > branch > m-c
ttaubert: are branches contrary to kill switches?
mitcho: kevinh: that's essentially what we did, though
mitcho: ttaubert: perhaps.
raymondlee: mitcho: but would it be hard for people to try it if we were not an add-on
iangilman: mitcho: that gets into the question of whether we should have thrown out the prototype and written the integration from scratch
mitcho: ttaubert: but there's still a lot more you can do in a branch in terms of code organization that you can't with an addon
iangilman: .... thoughts on that?
mitcho: iangilman: I say yes.
mitcho: iangilman: probably starting from tabbrowser on up
kevinh: iangilman: it sounds like, for our needs, Fx's code should be easier to get TO from an add-on
ttaubert: I don't know the prototype but this makes sense
iangilman: I think I agree, although there's no way the existing team could have done that... we would have needed a dedicated "man on the inside"; probably multiple
mitcho: ttaubert: you should go find it. it's kind of fun.
mitcho: iangilman: true.
ttaubert: mitcho, ok
iangilman: ... in which case we would have been gated behind people's existing priorities, and it never would have happened
mitcho: ttaubert: don't think it'll work with fx4, though.
ttaubert: so at least in an ideal world we should have thrown away the prototype
mitcho: iangilman: unless, as discussed above, we actually had an employee or two dedicated to the project early on
mitcho: the prototype was indeed great for getting people excited and mocking this up, though
mitcho: kudos to aza!
aza: Man, this is a lot of scrollback.
iangilman: mitcho: I finally upgraded my primary browser to Fx4 now that it's released, and guess what? There was an ancient copy of the add-on sitting dormant in my profile... freaked me out!
ttaubert: (*sshhh* aza is reading)
mitcho: hey seandunn !
seandunn: iangilman: good afternoon, sorry I'm late, long lunch.
seandunn: mitcho: hey!
mitcho: iangilman: hehe, maybe you can use that one too and get panorama-in-panorama, infinite loop style.
iangilman: seandunn: glad you could make it! We're still at it
raymondlee: hi seanunn
ttaubert: hey seandunn
raymondlee: seandunn ^
seandunn: raymondlee, ttaubert: hello!
iangilman: yeah, having a prototype is definitely a good idea, even if you do throw it away
iangilman: the prototype allowed us to do all sorts of crazy experimentation at the beginning
ttaubert: I think that's a good way to try yourself out
mitcho: also in terms of integration and code flow... I think it would have been beneficial to start writing mochitests earlier, like when we started to move to a branch. looping in the mozmill/qa folks earlier would also have been nice, though a lesser priority, imho.
kevinh: ... which was easy enough to do that just iangilman and aza could riff on varied ideas at a rapid pace
mitcho: (although our code coverage is not bad!)
aza: Absolutely, an internal champion is necessary. Better integration with the Firefox team is right as well (my fault, really, for not being more on top of that). Trying to formalize the inclusion process would have been nice, but I don't actually think the Firefox team knew what those metrics are. Do, and ask forgiveness later, is the right way to go.
aza: I would have liked to get in front of the perf people earlier.
iangilman: right... if things get too formalized, then this sort of big feature may never make it
iangilman: it's a delicate balance
aza: Although the goals around less than 1% impact on Ts was really helpful.
iangilman: Now that aza's back, I'd like to call out another important factor: Aza's evangelism, especially the blog posts with videos
iangilman: I think getting such a huge public buzz on the outside helped us get action on the inside
mitcho: it was great, but I wish we'd done a little more on that.
iangilman: Also, I believe ttaubert found us because of this?
ttaubert: iangilman, yep I just wanted to say that
aza: With one caveat: my biggest learning this time around is never ever use public buzz as a way of trying to directly influence key decision makers within Mozilla. That bounces really poorly. Did that with Ubiquity to general badness. Let this buzzosphere speak for itself this time and it worked much much better.
mitcho: when I was working on ubiquity previously, I was encouraged to blog often about progress and thoughts, and that was pushed to planet, and a lot of people within the organization could latently follow progress that way
iangilman: Also the tweets with links to "good first bug" lists... seemed like a good idea
iangilman: ... not sure how much impact that had
mitcho: iangilman: I think it was good, though b.m.o is kind of scary, which is unfortunate.
aza: Although we let the "how to get involved" webpage languish, at the beginning it really helped with rallying. Similarly, not using Bugzilla to triage and plan at the beginning helped a lot too.
iangilman: aza: you mean never directly say "we have to do this, look at the buzz!"?
aza: iangilman, exactly
iangilman: mitcho: b.m.o?
aza: b.m.o = bugzilla.mozilla.org
iangilman: yeah, knowing at what point in the project to engage with the higher overhead of mozilla process
kevinh: probably when we graduated from 2-3 people to 4-5 people, actually
iangilman: oh, here's another difference in the Panorama codebase... we used naturalDocs-style comments, and actually used them to generate a help page: http://hg.mozilla.org/labs/tabcandy/raw-file/tip/content/doc/index.html
aza: Did other people find that useful in getting into our code?
iangilman: ... did anyone find those pages helpful? ttaubert? seandunn?
ttaubert: it's the first time I see it
ttaubert: I'm quite used to self-documenting code
pcwalton: I never saw it
pcwalton: but that's cool
ttaubert: I appreciate it too, just didn't know it existed
seandunn: I never used it, although I probably would have ended up just looking at the code anyhow.
iangilman: fair enough
iangilman: I think it was useful for the original reviewers and the QA folks... aza?
mitcho: again, though, as nice as that was, it may have been beneficial in the long run to follow mozilla's commenting guidelines
aza: I wonder if the docs and adherence to having our code documented thusly held us to a higher standard.
iangilman: yes, sounds like it
aza: And yes, I think it was useful when brain dumping to our original reviewers.
aza: And moving forward, I'd assume it will help with modders
seandunn: iangilman: What I would have found much more helpful is a description on every class that, in detail, described all other classes it interacted with and how it was used. But, only when I was trying to grok everything.
iangilman: yeah, some sort of comment documentation in the code is good, but I think using the mozilla standard would be fine (though I think it would be great if the mozilla comment docs actually got turned into web page documentation)
aza: Another thing I found that the team did really well (especially coming from outside as we did) was form personal relationships with particular FX team members. That, I think, was absolutely necessary to the success of Panorama.
iangilman: hmm... I suppose we should have invited the read it later guy to this meeting... he was our add-on early adopter... would be interesting to have his perspective
kevinh: I've been engaged with him some of the last month or so
kevinh: he says our code was easy to dive into, and I believe, that the docs helped. He whipped up his Panorama integration in, what? A day?
iangilman: aza: how so?
iangilman: Like all the early help from mardak? and ehsan and zpao ?
iangilman: speaking of docs, we still don't have any official MDC docs for Panorama, nor do we have an API other than what naturally evolved
mitcho: creating the simple add-on sdk module for panorama integration was easy too (of course, that was me, though), though that's been on hold
iangilman: getting that landed will be great
mitcho: it would
iangilman: seandunn: would you like to talk about the review process? One of the things we've been discussing is the challenges of being sort of "outsiders" trying to make this happen
aza: iangilman: In terms of getting review. If we hadn't known dolske, ehsan, etc., then our crazy insane asks would have fallen on deaf ears.
iangilman: aza: for sure
raymondlee: Getting review was really a bottleneck of the whole landing process.
iangilman: speaking of which, #tabcandy has been crucial, and the fact that there are a number of browser core folks who are willing to just be here
seandunn: Any frustrations I had with the review process are in two areas: 1. The combination of lack of documentation on the "proper" way to do something combined with the iterative pain of satisfying a reviewer, and 2. the constant churn of code.
iangilman: Part of that bottleneck in the original landing was the fact that we had this gigantic wad of code... it would have been great to get the reviewers involved earlier.
iangilman: right... we had a number of occasions where we were review bound or approval bound, and a huge pile of patches would build up... when we finally got approval, everything was bitrotted
iangilman: I suspect these are issues for all of Firefox, not just Panorama, but definitely worth mentioning here
pcwalton: that's why I like project branches
iangilman: In someways we probably had it better than some other parts of firefox; at least we had advocates
pcwalton: having patches sit in bugzilla for a long time is a big drag on productivity
ttaubert: yep, I find it kind of problematic that waiting for review produced a big amount of context switches for everyone involved and reduced productivity everywhere
aza: Another complaint: One of the things I liked about early Tab Candy dev was that we were constantly fixing things and getting to a place of awesome dogfoodability. I.e., we were always polishing. As we got to the rush for features by feature freeze in FX that went by the wayside and the product suffered for it.
aza: ttaubert: *nods*
pcwalton: if they're landed *somewhere* then you have history, you have builds that people can try out, you have an ordering to prevent bitrot, you can take merges from trunk in an orderly fashion, etc.
pcwalton: for JS, anyone on the JS team can review patches before landing into tracemonkey
pcwalton: regardless of official module owner/peer status
pcwalton: it works great for them
iangilman: pcwalton: there's two ways you could take a project branch... review everything before landing there, so it's not a huge review process to land it on m-c, or review it afterwards; I can see advantages to both. for a big feature like Panorama (and perhaps even for the upcoming "all windows" work), it's possible you might even want two such branches, one for each. Or maybe that's just crazy
iangilman: pcwalton: sounds cool
iangilman: aza: we kind of ended up in "ship emergency" mode in september and stayed that way for months...
kevinh: our triaging was, like all of Fx, pretty brutal
iangilman: part of our challenge there was incompatibility with new things that had just landed in Firefox, like app tabs and sync
mitcho: iangilman pcwalton: I agree, the tracemonkey approach is interesting
iangilman: ashughes, Mil, philikon: anything to add, or just observing?
iangilman: seems like we're winding down here
kevinh: One thing we could have done earlier was start to plan for transition. We're at a stage now where it's hard to say who will be working on Panorama.
ashughes: iangilman: I wasn't paying attention, to be honest
iangilman: good point!
mitcho: kevinh: +1
iangilman: ashughes: no worries... just curious if you have any thoughts about what went well or poorly in the process, from where you sit
philikon: iangilman: sorry, was just observing
kevinh: and in general, should walled-off projects like Panorama get integrated into the greater moz-dev community?
juanb: From a QA perspective and probably others, it would have been the awesome sauce if we had clear use cases. While watching the tabcandy component in bugzilla I often felt like we were defining behavior along the way, which is fine because things change, but I'd get a little lost.
philikon: but i agree with project branches!
kevinh: ... or should we keep a small dedicated team?
juanb: "how should tab candy work in private browsing?" and then I'd just try to use common sense.
iangilman: juanb: I think we didn't know either! Probably having a common place we could continue to evolve the "spec" would have helped
ashughes: to carry off of juanb thoughts, I'd like to add that while that is true, I found developer response on bugs to QA requests was both timely and efficient
mitcho: iangilman: +1 on spec
juanb: Dev response was great.
iangilman: philikon: no worries. I'll take that as a sign that we didn't aggravate you too much during the last year
juanb: Without you having that much of mozilla experience, you were very interactive here and in bugzilla.
iangilman: ashughes, juanb good to hear!
iangilman: So having well-defined points of team accessibility is good
iangilman: How about our weekly "scrum"? Did the team find that helpful? It was pretty casual...
raymondlee: yes the weekly scrum is a good thing to know what others are doing.
mitcho: twas good
raymondlee: especially, I am in different time zone
iangilman: raymondlee: any other thoughts on how we did with the timezone thing?
ttaubert: I like(d) it, too
iangilman: ttaubert just solved it by staying up all night
ttaubert: sad but true!
iangilman: Not really a viable, long-term, plan, though
ttaubert: but actually working in a different timezone is not that bad
ttaubert: if you've got a todo list you can work on that over the day
mitcho: the timezone, and the remote-ness, forced us all to make sure to communicate well via scrum and bugs
raymondlee: yes, a todo list is a good thing.
ttaubert: the last 2-3 hours of the day can be spent with team communication and getting some reviews
mitcho: of course it would have been great to be all in the same room :), but there's an upside
ttaubert: so you know what to start with the next day
iangilman: yeah, I really liked how the project was moving forward 24 hours... I'd always wake up to a pile of new stuff
iangilman: So, mitcho has suggested we also use this time to look forward to next steps as well... sound good to folks? Any other backward looking items anyone wants to cover first?
raymondlee: One thing I like it's the bugzilla emails so I could read all those emails in the morning and knowing what happened while I was sleeping.
mitcho: raymondlee: same here.
ttaubert: yep that's great
mitcho: often from bed
iangilman: Yeah, I have to say bugzilla is horrifically ugly and clunky, but I've grown to love it
iangilman: ... and the email is a great way to know what's going on
kevinh: which I think speaks to how useful it is to have smaller, modularized, features
mitcho: atul's bugzilla dashboard didn't hurt
kevinh: we didn't catch the firehose from the rest of bugzilla, we only got Panorama stuff
iangilman: alright... let's call the debrief to a close... thank you for all the great thoughts! Kevin and I will distill this chat down and add it to the wiki, probably also a planet mozilla blog post
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100650.21/warc/CC-MAIN-20231207054219-20231207084219-00540.warc.gz
|
CC-MAIN-2023-50
| 29,015
| 298
|
https://medium.com/@taylordenouden
|
code
|
I wanted to detail here what I did to get tensorflow-gpu working with my fresh Ubuntu 18.04 LTS install. NVIDIA doesn’t have any official downloads for Ubuntu 18.04 yet, but you can get things to work with the available files for Ubuntu 17.04.
The first thing you should check is that you have an Nvidia driver installed for your graphics card. Your graphics card must support at least Nvidia compute 3.0 to install tensorflow-gpu.
You can check which graphics driver you have installed with the nvidia-smi command. You should see some output like the following:
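Before attempting the tensorflow-gpu install, it can help to script the driver check. This is a hedged sketch (not from the original post): it only tests whether the standard nvidia-smi utility is on the PATH and, if so, shows its output.

```python
# Sketch: check whether an NVIDIA driver (via nvidia-smi) is available
# before installing tensorflow-gpu. Illustrative only.
import shutil
import subprocess

def nvidia_driver_present():
    """Return True if the nvidia-smi utility is on the PATH."""
    return shutil.which("nvidia-smi") is not None

if nvidia_driver_present():
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    print(result.stdout)
else:
    print("No NVIDIA driver found; install one before tensorflow-gpu.")
```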
If you don’t have a proper driver…
Machine Learning Masters Student at University of Waterloo. I’m interested in generative models, anomaly detection, and machine learning safety.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629209.28/warc/CC-MAIN-20210617041347-20210617071347-00294.warc.gz
|
CC-MAIN-2021-25
| 748
| 6
|
https://www.microsoft.com/en-us/research/publication/entity-categorization-over-large-document-collections/?from=http%3A%2F%2Fresearch.microsoft.com%2Fapps%2Fpubs%2Fdefault.aspx%3Fid%3D74047
|
code
|
Extracting entities (such as people, movies) from documents and identifying the categories (such as painter, writer) they belong to enable structured querying and data analysis over unstructured document collections. In this paper, we focus on the problem of categorizing extracted entities. Most prior approaches developed for this task only analyzed the local document context within which entities occur. In this paper, we significantly improve the accuracy of entity categorization by (i) considering an entity’s context across multiple documents containing it, and (ii) exploiting existing large lists of related entities (e.g., lists of actors, directors, books). These approaches introduce computational challenges because (a) the context of entities has to be aggregated across several documents and (b) the lists of related entities may be very large. We develop techniques to address these challenges. We present a thorough experimental study on real data sets that demonstrates the increase in accuracy and the scalability of our approaches.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689490.64/warc/CC-MAIN-20170923052100-20170923072100-00268.warc.gz
|
CC-MAIN-2017-39
| 1,053
| 1
|
http://gis.stackexchange.com/questions/tagged/buffer+distance
|
code
|
Does anyone know the algorithm for buffering? I had difficulty drawing buffers, so I tried many values to get a 1 km buffer. I found that the value 0.008966 produces a 1 km buffer. Then I calculated the inverse ...
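The magic number in the first snippet comes from the length of one degree of latitude (roughly 111.2 km). A hedged Python sketch of the conversion (function name and the 111.2 km figure are illustrative; real buffers should be built in a projected CRS rather than in degrees):

```python
import math

def km_to_degrees(km, latitude_deg=0.0):
    """Rough conversion of a distance in km to decimal degrees.

    One degree of latitude is ~111.2 km; for east-west distances the
    figure shrinks by cos(latitude), so the degree value grows.
    """
    return km / (111.2 * math.cos(math.radians(latitude_deg)))

print(round(km_to_degrees(1.0), 6))  # ~0.009 degrees for 1 km at the equator
```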
In QGIS, I am creating a few different buffer around points to simulate walking distances. For example, 10 minutes walk about 800m at 5km/h. Using population density, I then clip a population layer ...
I am about to become desperate... I like to create buffer around several points - if possible - with a declaration of the size in km of these buffers. Just to show you what I am working with: My ...
I am trying apply a positive offset/buffer (enlarge a shape equally from the borders) to a polygon that is made up of a series of latitude/longitude coordinates by a distance that is defined in either ...
I have an sdc file with water bodies, i.e. oceans-rivers-lakes. I want to combine it with a shapefile with the boundaries of countries all over the world and produce two measures: 1) The average ...
in a FeatureClass I have a lot of polygons which contain attributes of different landuse-types. Now I am trying to dissolve those polygons of the same landuse-type, but only if the distance between ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120974.20/warc/CC-MAIN-20140914011200-00295-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
CC-MAIN-2014-41
| 1,208
| 6
|
https://blog.stoa.org/archives/487
|
code
|
A message from G. Schwendner (Wichita State University):
I am putting up a weblog to keep track of new publications, announcements, etc. in papyrology (my field). We see these, for the most part, on the Papy-list, but the archives are restricted to list members, and so its news does not flow very far. Most important, it does not get into the search engines. Anyway, below is the URL; let me know what you think. I have not had the time to reformat the Greek that appears in the tables of contents, but the publishers’ pdf versions are linked, and few, I think, are searching the web for unicode Greek text at this point.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506686.80/warc/CC-MAIN-20230925051501-20230925081501-00267.warc.gz
|
CC-MAIN-2023-40
| 621
| 2
|
https://www.rsaconference.com/experts/sean-barnum
|
code
|
Sean Barnum is an Information Security Principal at The MITRE Corporation where he acts as a senior advisor to the U.S. government and as a technical architect and community leader for various information security standardization efforts including STIX, CybOX, CAPEC, MAEC, CWE and SAFES among others. Barnum has a broad base of over 25 years of experience in the software and technology industry. He is a frequent contributor, speaker, trainer and author on information security topics. He is coauthor of the book “Software Security Engineering: A Guide for Project Managers”, published by Addison-Wesley. He is involved in the information security related standards efforts of ISO, OMG and IETF, among other international standards bodies.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100290.24/warc/CC-MAIN-20231201151933-20231201181933-00648.warc.gz
|
CC-MAIN-2023-50
| 745
| 1
|
https://dev.motionographer.com/2008/03/25/aeny-meeting-thursday-march-27th/
|
code
|
I won’t be able to make this meeting (I’ll be packing for my move up to The Big Apple), but I encourage you all to go. February’s meeting was packed, and it’s invigorating to feel that kind of nerdy energy. I picked up a few great AE tips, too. Oh, and they have free pizza and soft drinks (which is reason enough for most people to attend).
More details on the official AENY site.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510149.21/warc/CC-MAIN-20230926043538-20230926073538-00713.warc.gz
|
CC-MAIN-2023-40
| 389
| 2
|
https://tomsondev.bestsolution.at/2012/01/16/svg-for-fxml-conversion/
|
code
|
Some weeks ago I read Jasper Potts’ blog post on an FXG to FXML conversion tool (using a simple XSL stylesheet). In that post, Jasper asked whether someone might one day provide a similar conversion tool for SVG.
This has been on my list for some time now and because I was traveling a bit in the last 2 weeks I had some time to hack on such a converter. I’ve not used XSL but instead used xtend (svg-parser (java),fxml-converter (xtend)) (because an XML-File comes with a lot of multiline strings). It took some time to wrap my head around SVG and how I could translate this in FXML/JavaFX 2.0 API calls but I finally I’m at a point where I have something to show off:
Please note that the JavaFX 2.0 image on the left is made up only of primitive JavaFX 2.0 elements (Circles, SVGPaths, Rects, Gradients, …). The converter by far does not yet handle all the nifty stuff one can do with SVG, but it shows me that such a conversion tool is doable to some extent. When it comes to filtering, for example, SVG has many more definitions than the JavaFX API currently provides.
I was not even able to convert an SVG Gaussian blur to JavaFX’s Gaussian blur (but that might just be me; I’m not an expert in graphics stuff), and SVG allows multiple filters/effects to be applied to a single node. So to provide full SVG support in JavaFX 2.x without resorting to libraries like Batik, we’d need more API (or at least a tutorial on how to, e.g., write custom effects). But even with the current API one can get quite far, as you don’t notice anything missing in the above screenshot.
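The core of such a converter is a per-element mapping from SVG attributes to JavaFX/FXML properties. A hypothetical sketch of one such mapping, an SVG circle to an FXML Circle (this is an illustration of the idea, not the author's actual xtend converter; attribute names follow the JavaFX 2.0 Circle properties):

```python
# Sketch: map an SVG <circle> fragment to a JavaFX FXML <Circle> element.
import xml.etree.ElementTree as ET

def svg_circle_to_fxml(svg_fragment):
    """Translate one SVG circle element into its FXML counterpart."""
    el = ET.fromstring(svg_fragment)
    return '<Circle centerX="%s" centerY="%s" radius="%s" fill="%s"/>' % (
        el.get("cx", "0"),
        el.get("cy", "0"),
        el.get("r", "0"),
        el.get("fill", "BLACK"),  # default fill when the SVG omits it
    )

print(svg_circle_to_fxml('<circle cx="10" cy="20" r="5" fill="#ff0000"/>'))
```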
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710421.14/warc/CC-MAIN-20221210074242-20221210104242-00210.warc.gz
|
CC-MAIN-2022-49
| 1,558
| 4
|
http://books.nap.edu/openbook.php?record_id=13430&page=185
|
code
|
weigh the costs of moving to it. With the experience, successes, and lessons learned of the past decade, the climate modeling community is positioned to accelerate infrastructure adoption. Cross-laboratory intercomparisons are now routinely conducted and, more importantly, are the way forward. End users require climate model information to be robust and reliable. Common infrastructure improves the ability to enforce scientific methodology (e.g., controlled experimentation, reproducibility, and model verification) across institutions and is one of the primary building blocks of that robustness and reliability.
So far, no one software framework has become a universal standard, because modeling centers that initially invested in one framework have had insufficient incentive to switch to another. Nevertheless, we believe that two critical strategic needs—that the U.S. climate community needs to more effectively collaborate, and that it needs to nimbly adapt to a wave of disruptive new computing technology—position the community for a further unifying step. The vector-to-parallel disruption led to widespread adoption of framework technologies at the scale of individual institutions. The climate modeling community can now conceive of a framework that could be subscribed to by all major U.S. climate modeling groups, supports a hierarchy of models with component-wise interchangeability, and also supports development of a high-performance implementation that enables climate models of unprecedented resolution and complexity to be efficiently adapted to new architectural platforms. This idea is explored below.
Finding 10.5: Shared software infrastructures present an appealing option for how to face the large uncertainty about the evolution of hardware and programming models over the next two decades.
A NATIONAL SOFTWARE INFRASTRUCTURE FOR CLIMATE MODELING
Very complex models have emergent behavior whose understanding requires being able to reproduce phenomena in simpler models. Chapter 3 makes a strong case for hierarchies of models adapted for different climate problems. From the computational perspective, some model types can be classified by a rough pace of execution needed (i.e., model simulated time per computer clock time) to make efficient scientific progress:
• process study models and weather models (single component or few components; dominated by “fast” physics; 1 year/day),
• comprehensive physical climate models (ocean-atmosphere, land and sea ice,
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645171365.48/warc/CC-MAIN-20150827031251-00140-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 2,506
| 7
|
https://www.spigotmc.org/search/194110689/
|
code
|
This is where my open-Inventory code is; I put it in another file. I tried to use a HashMap, but then I realized I always create a new inv. Are there...
Well, I did it before and there's no error on console but in Minecraft when I use "/tpp" and "/setp" it says "An internal error occurred while...
You can watch this video, sorry if I records on my laptop, it will be very laggy. I will explain the video, "/tpp" to open Inventory and "/setp...
There are no errors in my console; it's just that when I add a new item to an empty slot, it replaces the first item in that inventory.
No, it doesn't work for me too.
I tried it, but the item just replaces itself instead of staying in the next empty slot.
Thanks for your answer, but it doesn't work. I tried getItem == null too, but it still doesn't work; here's my code.
I think so but I has no idea where did I mess up.
Here's my full Inventory code
Hi, I'm trying to find the next empty slot to put my item in, but it doesn't seem to work. I tried inv.fristEmpty() too, but it didn't work either....
I tried it, but it replaces itself instead of being added to a new slot.
I want to create a new boss which has the same summon type of the Wither like when you place 3 wither skulls on top of 4 soul sands the Wither appears
Hi guys, is it possible to make a boss appear when I place 4 soul sands with 3 wither skulls on top, like the Wither?
Hi, I made a custom Inventory that stores my teleport points when I use the command /setp <name>. But when I checked whether the next slot is empty, it...
Oh, thank you.
It works, but this line of code doesn't work yet.
I created a custom Inventory and I want to stop players from taking items out of it, but it doesn't seem to work. And there is no error in the...
Oops thanks you
Hi, i want to create Vector for my arrows but this happended
Hi, I'm trying to create and open an Inventory with a command, but it keeps showing the error below. I added the command to onEnable and my yml file....
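Several of the snippets above ask the same thing: how to find the next empty inventory slot instead of overwriting slot 0. A plain-Java sketch of that logic (no Bukkit dependency; Bukkit's real API offers Inventory#firstEmpty() for this, and the class and names here are hypothetical):

```java
// Sketch: find the first empty slot in an array-backed "inventory".
public class SlotFinder {

    // Return the index of the first null slot, or -1 if every slot is taken.
    static int firstEmpty(Object[] slots) {
        for (int i = 0; i < slots.length; i++) {
            if (slots[i] == null) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        Object[] inv = { "sword", null, "apple" };
        int slot = firstEmpty(inv);
        if (slot >= 0) {
            inv[slot] = "new item"; // place in the empty slot instead of overwriting slot 0
        }
        System.out.println("placed at slot " + slot);
    }
}
```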
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00631.warc.gz
|
CC-MAIN-2021-43
| 1,998
| 22
|
https://simonwillison.net/2004/Jun/9/backporting/
|
code
|
Backporting from Python 2.3 to Python 2.2
9th June 2004
We have a home-grown templating system at work, which I intend to dedicate an entry to some time in the future. We originally wrote it in Python 2.2, but upgraded to Python 2.3 a while ago and have since been evolving our code in that environment. Today I found a need to load the most recent version of our templating system on to a small, long neglected application that had been running the original version ever since it had enough features to be usable.
Unfortunately, this application was running on a server that only had Python 2.2. Installing Python 2.3 would have been somewhat more painful here than on other servers we run for reasons I won’t go in to, so I decided to have a go at getting our current code to run under the older Python version.
In the end, I only had to make three minor changes, all at the top of the file in question.
I added "from __future__ import generators" as the very first line of the file. We use generators (with the "yield" statement) in a few places—this feature was only properly added in Python 2.3, but was made available in Python 2.2 as a “future enhancement” through the aforementioned obscure import.
I added "True, False = 1, 0" on the next line down. Surprisingly, Python 2.2 had no support for a boolean type and instead used a test for non-zero. The above line defines constants that behave enough like Python 2.3’s True and False to avoid any problems.
I defined an "enumerate" function, which was introduced for real in Python 2.3. Here’s the code I used:
def enumerate(obj):
    for i, item in zip(range(len(obj)), obj):
        yield i, item
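A quick way to sanity-check the shim is to compare its output against what the real builtin produces (a hedged, self-contained sketch using modern print syntax, not code from the original post):

```python
# Sketch: the backported enumerate, shadowing the builtin as the post does.
def enumerate(obj):
    for i, item in zip(range(len(obj)), obj):
        yield i, item

print(list(enumerate("ab")))  # [(0, 'a'), (1, 'b')], same as the builtin
```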
All in all it only took around ten minutes to put the above together, after which the script worked just fine. It was interesting to see how our code had grown to rely on Python 2.3 features without us realising it.
Update: Check this entry’s comments for improvements to the above code snippets.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100912.91/warc/CC-MAIN-20231209134916-20231209164916-00357.warc.gz
|
CC-MAIN-2023-50
| 2,794
| 24
|
http://lovegoodluna.blogspot.com/2013/09/all-books-by-dan-brown-dan-brown-ebook.html
|
code
|
Here are all the books by Dan Brown... I wish to share this little collection from my library with all of you...
However, let me remind you that I DON'T OWN any of these books (as in, these ebooks were not created by me)... I simply looked for them in order to read them, and that caused me some trouble. Therefore, I feel that I can share them here with all of you, so that it becomes easier for someone else who wishes to read them.
Further, I take it as an initiative to motivate others to read! Often we don't read due to a lack of resources, or we don't get the right content, or content that holds us. I am not saying that I will post everything, but I will post everything that I have and try to post more and more books!!
Enjoy Reading :)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591150.71/warc/CC-MAIN-20180719164439-20180719184439-00475.warc.gz
|
CC-MAIN-2018-30
| 729
| 4
|
https://inthirdperson.com/2022/07/02/triumphant-return-mario-strikers-battle-league-live-stream/
|
code
|
Finally, Mario and company have returned to the pitch for some footy! But is Mario Strikers: Battle League the triumphant return we’ve been hoping for?
View the full post to see the full stream, highlights, and shoutouts!
- Thank you crashbandicootfan22 for the sub!
- Thank you PlayerTwoStart for renewing your sub! 38 months total!
- Thank you to everyone who was hosting the channel!
- Thank you to everyone that tuned in and played with me! I appreciate your company!
Make sure to never miss a stream by following my channel and turning your notifications on! You can also follow me on Twitter and Instagram for stream updates and other cool stuff posted daily!
Buy Mario Strikers: Battle League Now From Amazon.com
[Purchasing through this Amazon affiliate link gives me a small commission without adding any extra cost or effort to you. Thanks for your support!]
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00267.warc.gz
|
CC-MAIN-2023-14
| 869
| 9
|
https://forum.qorvo.com/t/dws3000-with-raspberry-pi/12038
|
code
|
Hi all! I have two DWS3000 eval boards connected to two Raspberry Pis. I downloaded the API DWS3000_Release_v1.1 and followed the instructions for the Raspberry. I have been able to get the simple RX and simple TX examples to work fully. However, ranging only works with the ss_twr_initiator/responder_sts example. All the other examples get stuck somewhere in the frame exchange, with a timeout raised. I've been playing with the various timeouts, but with no luck so far, and I'm not sure this is the way to solve the issue… Should the examples work out of the box? Anyone with some suggestions?
What kind of translation board are you using? And what version of the RPi board?
We use Waveshare ARPI600.
All the examples were tested on a slightly different DW3000 board using the above translation board, but the digital signals like the SPI bus, IRQ line etc. are the same. I’ll test the examples on the DWS3000 this week to see if there are any issues.
Hi Alec, I didn’t have a Waveshare ARPI600, so I made myself a “translation board” with a simple stripboard. I think the connections are OK, since some of the examples fully work, and even ranging works in one case…
Have you tested the examples?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00433.warc.gz
|
CC-MAIN-2022-33
| 1,222
| 6
|
https://mspoweruser.com/sign-up-for-skype-real-time-speech-translator-preview-today/
|
code
|
A few months back, Satya Nadella and Skype Corporate Vice President Gurdeep Singh Pall unveiled Skype Translator, Microsoft’s breakthrough in real-time speech translation, and today they announced that they are rolling out a Skype Translator preview program. Visit the sign-up page here. The Skype Translator Preview will support a few languages at first and will initially be available only on devices running Windows 8.1 and the Windows 10 Technical Preview. Participation in the preview will be confirmed depending on: the date you registered; the devices you selected; the availability of selected languages; and your registration code (if you have one).
We have seen tremendous interest and enthusiasm for Skype Translator from around the globe over the last few months and we’re incredibly excited to share it with the world. However, we want to hear from you first! The preview program will have limited spots available, so register today for your chance to secure a virtual spot in line!
The preview program will be free and will initially be available for Windows 8.1 computers and tablets only, and will kick-off with a limited selection of languages. That said, as part of the sign-up process, you’ll have the chance to tell us which languages are important to you, what platforms you’d like to see added, and how you plan to use Skype Translator once it becomes available. We look forward to this feedback, as it will help enhance future releases.
To be part of the Skype Translator preview program, register today
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00427.warc.gz
|
CC-MAIN-2023-06
| 1,500
| 4
|
https://boards.dingoonity.org/gcw-games-and-homebrew/port-requestminecraft-and-flappy-bird/
|
code
|
These are the requirements for Minecraft.
CPU: Intel Pentium D or AMD Athlon 64 (K8) 2.6 GHz
GPU (Integrated): Intel HD Graphics or AMD (formerly ATI) Radeon HD Graphics with OpenGL 2.1
GPU (Discrete): Nvidia GeForce 9600 GT or AMD Radeon HD 2400 with OpenGL 3.1
HDD: At least 200MB for Game Core and Other Files
Java 6 Release 45
CPU: Intel Core i3 or AMD Athlon II (K10) 2.8 GHz
GPU: GeForce 2xx Series or AMD Radeon HD 5xxx Series (Excluding Integrated Chipsets) with OpenGL 3.3
Latest release of Java 7 from java.com
With these in mind, it wouldn't be possible to port the standard release, but a pocket release, like that of IOS, may be possible. I don't know what the requirements are for the pocket release off of the top of my head.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389309.17/warc/CC-MAIN-20200525161346-20200525191346-00440.warc.gz
|
CC-MAIN-2020-24
| 740
| 10
|
https://forum.predator.illfonic.com/t/voice-chat-wont-work-pc/29379
|
code
|
I can’t hear anybody’s voice chat at all - it’s really quite frustrating to not be able to use the FT’s voice chat to locate them, nor cooperate with FT members because I just straight-up can’t hear them at all.
I have disabled all other audio devices other than the one I use to listen to game-audio, as well - meaning it’s not a device issue.
I also tested the voice-to-text option and that seems to sometimes pick up audio and transcribes it with some amount of accuracy, until it simply stops working entirely.
Please, I’ve been playing this game since release and I have never been able to hear even a single player’s voice chat regardless of the fact that I’ve swapped PCs since then, as well as having done who knows how many reinstalls.
This finally needs to be fixed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00130.warc.gz
|
CC-MAIN-2022-49
| 794
| 5
|
https://socratic.org/questions/a-337kg-crate-needs-to-be-lifted-to-a-height-of-2-3m-using-a-ramp-that-is-7-6m-l
|
code
|
A 337kg crate needs to be lifted to a height of 2.3m using a ramp that is 7.6m long. Ideally, how much work will it take to lift the crate?
I think that you can use the work done against gravity:
W = m g h
With your data:
W = 337 kg × 9.8 m/s² × 2.3 m ≈ 7.6 × 10³ J
You can also try using the force needed to push the crate (equal to the
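The ideal-case number can be reproduced with a short sketch (not from the original answer). Note that in the ideal, frictionless case the work depends only on the height gained, so the 7.6 m ramp length does not enter the calculation:

```python
# Sketch: ideal work to lift the crate, W = m * g * h.
m = 337.0   # crate mass in kg
g = 9.8     # gravitational acceleration in m/s^2
h = 2.3     # height gained in m

W = m * g * h
print(f"W = {W:.0f} J")  # about 7.6 kJ
```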
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882634.5/warc/CC-MAIN-20200703153451-20200703183451-00493.warc.gz
|
CC-MAIN-2020-29
| 286
| 4
|
http://zootool.com/watch/243n0xs/view:likes
|
code
|
27 Apr 11
Description: Finding the perfect IDE for Python isn’t an easy feat. There are a great many to choose from, but even though some of them offer really nifty features, I can’t help myself but feel attracted to VIM anyway. I feel that no IDE accomplishes the task of giving the comfort of complete power over the code – something is always missing. This is why I always come back to using IDLE and VIM. Those two seem to be best companions when doing some quick and agile hacking – but when it comes to managing bigger and longer term projects, this combo needs some tweaking. But when it’s done, VIM will be a powerful IDE for Python – including code completion (with pydoc display), graphical debugging, task management and a project view.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052286/warc/CC-MAIN-20131204131732-00035-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 761
| 2
|
https://support.optimizely.com/hc/en-us/articles/4423155969037-Lead-synchronization
|
code
|
About lead synchronization
This topic describes the lead synchronization that lets you import leads generated on social networks or other external sources into Optimizely Campaign recipient lists.
Currently, lead synchronization is only available for leads from Meta (Instagram and Facebook Lead Ads).
Managing synchronization tasks
In the menu bar, select Administration > Lead synchronization. You can manage synchronization tasks as follows:
- Create. To create a new synchronization task, click Create…. (See Creating synchronization tasks below).
- Edit. Select an inactive synchronization task and click Edit…. Edit the synchronization task as described under Creating synchronization tasks.
- Delete. Select an inactive synchronization task and click Delete.
- Start. Select an inactive synchronization task and click Start.
- Pause. Select an active synchronization task and click Pause.
- Log. Select a synchronization task and click Log. The Log Messages window opens. In the overview, you can find information such as opt-in notifications and error messages for the selected synchronization task.
Creating synchronization tasks
- Click Create… and select the lead source.
- Log in to your account of the external service.
- Configure the synchronization task:
- Name. Enter a name for the synchronization task.
- Source Page. Select the source page from the drop-down list.
- Lead Form. Select the lead form from the drop-down list.
- Recipient List. In the drop-down list, select the recipient list to which you want to import the leads. (You can only select opt-in recipient lists.)
- Update existing recipient data. To update existing recipient data with that of the imported leads, activate the check box.
- Opt-in process. Select a double opt-in process from the drop-down list.
- Click Next.
- In the Row Mapping tab, assign a lead form field to each of the recipient list fields by selecting them from the drop-down lists. If you do not want to import data into certain recipient list fields, select ---.
- Select the Fixed value check box to change the drop-down list to an entry field in which you can enter any text or value (except for the Email field). This value or text is then imported into the corresponding field for each record.
- Click Next.
- Click Finish.
→ Newly created synchronization tasks are directly active and start automatically. For example, as soon as a user clicks on a Facebook Lead Ad and submits their data in the form, the contact data is imported into the selected opt-in recipient list.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00884.warc.gz
|
CC-MAIN-2024-10
| 2,544
| 27
|
https://www.safaribooksonline.com/library/view/oracle-plsql-programming/0596003811/ch08s03.html
|
code
|
Most of the time, working with strings is very straightforward. However, there are some subtle issues you should be aware of, as described in the next few sections.
One issue that often causes great consternation, especially to people who come to Oracle after working with other databases, is that Oracle treats empty strings as NULLs. This is contrary to the ANSI SQL standard, which recognizes the difference between an empty string and a string variable that is NULL.
The following code demonstrates Oracle’s behavior:
/* File on web: empty_is_null.tst */
DECLARE
   empty_varchar2 VARCHAR2(10) := '';
   empty_char     CHAR(10) := '';
BEGIN
   IF empty_varchar2 IS NULL THEN
      DBMS_OUTPUT.PUT_LINE('empty_varchar2 is NULL');
   END IF;

   IF '' IS NULL THEN
      DBMS_OUTPUT.PUT_LINE(''''' is NULL');
   END IF;

   IF empty_char IS NULL THEN
      DBMS_OUTPUT.PUT_LINE('empty_char is NULL');
   END IF;
END;
The output is:
empty_varchar2 is NULL
'' is NULL
You’ll notice in this example that the CHAR variable is not considered NULL. That’s because CHAR variables, as fixed-length character strings, are never truly empty. The CHAR variable in this example is padded with blanks until it is exactly 10 characters in length. The VARCHAR2 variable, however, is NULL, as is the zero-length string literal.
You have to really watch for this behavior in IF statements that compare two VARCHAR2 values. Consider a program that queries the user for a name, and then compares that name to a value read ...
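A minimal sketch of that trap (the variable name here is illustrative, not from the book): comparing a NULL VARCHAR2 with the equality operator yields NULL, so the equality branch never fires; you must test with IS NULL (or wrap the value in NVL) instead.

```sql
DECLARE
   user_name VARCHAR2(30) := '';   -- Oracle stores this as NULL
BEGIN
   IF user_name = '' THEN
      DBMS_OUTPUT.PUT_LINE('equal');             -- never executes: NULL = NULL is not TRUE
   ELSIF user_name IS NULL THEN
      DBMS_OUTPUT.PUT_LINE('user_name is NULL'); -- this branch runs
   END IF;
END;
```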
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592523.89/warc/CC-MAIN-20180721110117-20180721130117-00552.warc.gz
|
CC-MAIN-2018-30
| 1,466
| 8
|
https://mail.haskell.org/pipermail/glasgow-haskell-users/2002-January/002893.html
|
code
|
type definition with strict products
Wed, 30 Jan 2002 16:10:32 -0000
> In a file that is too large to post here completely, I have used:
> type Result val s = (# val, Steps s #)
> and I get the error message:
> Illegal unboxed tuple type as function argument: (# val, Steps s #)
> In the type: (# val, Steps s #)
> While checking the RHS of a type synonym declaration `Result'
> In the type synonym declaration for `Result'
Function arguments of unboxed tuple type aren't allowed. See
(sorry about the garbled URL, I have no control over the fact that they
get arbitrarily chopped at 72 columns :-()
Older versions of GHC were less strict about checking for illegal uses
of unboxed tuples, with the result that some invalid programs got
further through the system leading to compiler crashes or invalid code
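One common way around the restriction (a sketch only; `Steps` is stood in by a placeholder for the poster's actual type) is to replace the unboxed-tuple type synonym with a boxed data type whose fields are strict, which is legal in argument position and which GHC can often unbox anyway:

```haskell
data Steps s = Steps s      -- placeholder for the poster's real Steps type

-- A strict boxed pair: usable anywhere an ordinary type is, unlike the
-- unboxed tuple synonym that GHC rejects as a function argument.
data Result s val = Result !val !(Steps s)
```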
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363510.40/warc/CC-MAIN-20211208114112-20211208144112-00572.warc.gz
|
CC-MAIN-2021-49
| 805
| 15
|
https://www.floppycats.com/how-to-pick-up-a-ragdoll-cat-how-to-hold-a-ragdoll-cat.html
|
code
|
How to Pick Up a Ragdoll Cat - How to Hold a Ragdoll Cat
I am asked many questions through e-mail and one of them is how to go about holding or picking up a cat. So I made a video of me picking up my two cats and my parents' two Ragdoll cats.
Here is a video of my mom and her identical twin sister picking up Hobbs:
How do you pick up your cat? How do you hold them once you pick them up?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00226.warc.gz
|
CC-MAIN-2022-49
| 389
| 4
|
https://it-journey.dev/docs/docs/setting-up-personal-website-on-gitlab-pages/
|
code
|
This post introduces how to set up a Jekyll website on GitLab Pages.
1. GitLab Pages
GitLab Pages is very similar to GitHub Pages. GitLab Pages also supports custom domain names and SSL certificates, and includes a continuous integration platform. GitLab Pages serves static websites built with any Static Site Generator (SSG), such as Jekyll, Hugo, Hexo, Middleman and Pelican.
There are two ways of getting started with GitLab Pages: either fork an existing project, or create a new one. In this posting, I will introduce how to migrate my personal Jekyll website from GitHub to GitLab.
2. Fork GitLab Pages Examples
2.1 Fork Existing Jekyll Project
Fork the repository https://gitlab.com/pages/jekyll. After forking, you will see a new project in the list.
2.2 Remove the Fork Relationship
Go to Settings > General, scroll down, expand ‘Advanced’ and click the “Remove fork relationship” button.
2.3 Trigger Build
Edit any file to trigger a build, for example ‘README.md’. Stage and commit the change; the pipeline starts automatically to run a test build. After the build is finished, merge the change. The pipeline starts running again, and this time the project is built and deployed.
2.4 Test Page
Go to Settings > Pages; the URL of the website hosted on GitLab Pages appears. Click on it and you can see the Jekyll site.
2.5 Change Domain
Notice the URL of the site is https://bamr87.gitlab.io/jekyll/. We can change the project path so the site is served from a different address, for example https://bamr87.gitlab.io/
Go to Settings > General, enter the new path and click the “Change path” button.
Go to Settings > Pages; the URL is updated.
Click the URL and you will see the site at the new address (please wait a few minutes if you don’t see it immediately). However, there is something wrong here: the page doesn’t look right. This is because the main.css file is not linked properly.
We can see the cause with the Chrome developer tools.
To solve the issue, edit baseurl in _config.yml and set it to an empty string. Submit and merge the request.
After the site is successfully built and deployed, we can access it with the new domain.
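The fix boils down to one setting in Jekyll’s _config.yml. A minimal sketch of the relevant lines (the url value is assumed from the post, not copied from the repository):

```yaml
# _config.yml — only the settings relevant to the path change
baseurl: ""                      # empty, since the site is now served from the root path
url: "https://bamr87.gitlab.io"  # assumed site URL from the post
```

With an empty baseurl, asset links such as /assets/main.css resolve against the domain root instead of the old /jekyll/ project path.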
3. Migrate Existing Site
3.1 Copy Files
Clone the project from https://github.com/bamr87/bamr87.github.io to local, remove all existing files and copy all of the files from github repository into this folder.
Some configuration files need to be updated.
source "https://rubygems.org"
ruby RUBY_VERSION

# This will help ensure the proper Jekyll version is running.
gem "jekyll", "3.8.5"

# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw, :jruby]

gem "jekyll-seo-tag"
image: ruby:2.6

variables:
  JEKYLL_ENV: production
  LC_ALL: C.UTF-8

before_script:
  - bundle install

test:
  stage: test
  script:
    - bundle exec jekyll build -d test
  artifacts:
    paths:
      - test
  except:
    - master

pages:
  stage: deploy
  script:
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  only:
    - master
public is the default output folder for GitLab Pages.
Start Jekyll locally with the command ‘bundle exec jekyll serve’.
$ bundle exec jekyll serve
Configuration file: /Users/Johnny/GitLab/bamr87.gitlab.io/_config.yml
            Source: /Users/Johnny/GitLab/bamr87.gitlab.io
       Destination: /Users/Johnny/GitLab/bamr87.gitlab.io/_site
 Incremental build: disabled. Enable with --incremental
      Generating... done in 31.496 seconds.
 Auto-regeneration: enabled for '/Users/Johnny/GitLab/bamr87.gitlab.io'
    Server address: http://127.0.0.1:4000
  Server running... press ctrl-c to stop.
Access http://127.0.0.1:4000 or http://localhost:4000.
Test on GitLab
Push the changes to GitLab. After the site is successfully compiled and deployed, we are able to access it. The migration is done.
There is a limitation of GitLab Pages: each site can’t be larger than 1 GB. My website has lots of images, so it currently can’t be deployed due to a ‘too large’ error.
ERROR: Uploading artifacts to coordinator... too large archive
       id=301083722 responseStatus=413 Request Entity Too Large
       status=413 Request Entity Too Large token=GqZyjRGe
FATAL: too large
ERROR: Job failed: exit code 1
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648245.63/warc/CC-MAIN-20230602003804-20230602033804-00083.warc.gz
|
CC-MAIN-2023-23
| 4,089
| 36
|
http://copan.me/blender-265-manual-43/
|
code
|
So keep reading this manual, and learn the great tool that Blender is. This manual is a good start, though it serves more as a reference. – December: Fire and smoke improvements, anisotropic shader for Cycles.
|Published (Last):||2 November 2009|
|PDF File Size:||9.99 Mb|
|ePub File Size:||14.32 Mb|
|Price:||Free* [*Free Registration Required]|
Addons can encapsulate certain functionality neatly for writing tools to improve your work-flow or for writing utilities for others to use.
Cycles gets basic volumetric support on the CPU, more improvements in the motion tracker, two new modeling modifiers, some UI consistency improvements, and more than bug fixes. Although there were clearly shortcomings in the then current version of Blender, such as a complex internal software architecture, unfinished features and a non-standard way of providing the GUI, the enthusiastic support from the user community and customers who had purchased Blender Publisher in the past meant that Ton could not justify leaving Blender to fade into insignificance.
You can find out more information by visiting the 2. NeoGeo quickly became the largest 3D animation studio in the Netherlands and one of the leading animation houses in Europe.
This large inflow of cash enabled NaN to rapidly expand its operations. These properties are handled differently to typical Python class attributes because Blender needs to display them in the interface, store their settings in key-maps and keep settings for re-use.
This next addon is simple but shows how to integrate a script into Blender using an Operator, which is the typical way to define a tool accessed from menus, buttons and keyboard shortcuts. Everything here has been covered in the previous steps; you may still want to try running the addon and consider what could be done to make it more useful.
Preview release of the 2. For API documentation on the functions listed above, see: However, running the script won't move any objects; for this you need to execute the newly registered operator.
Creative Freedom Starts Here
Cycles gets volume and SSS support on the GPU, pie menus are added and tooltips greatly improved, the Intersection modeling tool is added, new sun beam node for the compositor, Freestyle now works with Cycles, texture painting workflow is improved, and more than bug fixes.
Blender goes Open Source 13 October. Lots of fixes, and some Game Engine features. Operator properties are defined via bpy. In July, Ton managed to get the NaN investors to agree to a unique Blender Foundation plan to attempt to release Blender as open source.
Now try copy this script into Blender and run it on the default cube. There are many arguments you can pass to properties to set limits, change the default and display a tooltip.
Notice this addon does not do anything related to Blender, the bpy module is not imported for example.
Game Engine returns, ambient occlusion, new procedural textures. It was the release following Project Peach. NeoGeo created award-winning productions European Corporate Video Awards and for large corporate clients such as multinational electronics company Philips. Soon NaN boasted as many as fifty employees working around the world trying to improve and promote Blender.
These properties from bpy. The first truly open source Blender release.
At the core of NaN was a desire to create and distribute a compact, cross-platform 3D application for free. This was the release following Project Apricot. Dynamic topology, rigid body simulation, improvements in UI and usability including retina display supportCycles now supports hair, the Bevel tool now supports individual vertex beveling, new Mesh Cache modifier and the new UV Warp modifier, new SPH particle fluid solver.
Note The destination of the addon depends on your Blender configuration.
End of C-key, Blender full freeware again. Notice how the key-map item can have a different total setting than the default set by the operator; this allows you to have multiple keys accessing the same operator with different settings.
Full rework of armature system, shape keys, fur with particles, fluids, and rigid bodies. Enter search terms or a module, class or function name. To find the identifier of a menu you can hover your mouse over the menu item and the identifier is displayed.
For docs on extending menus see: Blender in development at animation studio NeoGeo. After careful deliberation Ton decided that the current in-house 3D tool set for NeoGeo was too old and cumbersome to maintain, and needed to be rewritten from scratch. Dive Into Python sections 1, 2, 3, 4, and 7.
Directly executing the script multiple times will add the menu each time too. You can also find addon path locations by running this in the Python console. Within NeoGeo Ton was responsible for both art direction and internal software development. More than bug fixes.
While this is handled in a fairly Pythonic way, be mindful that you are in fact defining tool settings that are loaded into Blender and accessed by other parts of Blender, outside of Python. As NeoGeo continued to refine and improve Blender it became apparent to Ton that Blender could be used as a tool for other artists outside of NeoGeo.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00509.warc.gz
|
CC-MAIN-2019-43
| 5,456
| 25
|
https://www.cryptocontrol.io/en/about/how-it-works
|
code
|
How CryptoControl works
CryptoControl is meant to be your go-to place for all the information about crypto.
We do this by first taking in large amounts of data (from both paid and private sources), then applying complex AI & NLP engines to make some sense out of the data, and finally presenting it in an easy-to-understand format through our mobile app and website.
We have spiders crawling over 1000 sites every 5 minutes, fetching news articles and processing them. We also collect data from Reddit, CoinMarketCap, and Twitter.
Once our spiders collect all the articles, they are sent to our special recommendation engine which has a built-in AI that categorizes and understands the context of each article.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531462.21/warc/CC-MAIN-20190421120136-20190421142136-00373.warc.gz
|
CC-MAIN-2019-18
| 713
| 5
|
https://blog.sea-jay.net/gamedev/2018/07/06/announcing-project-lily-tin.html
|
code
|
Over the past couple of months, I’ve become increasingly aware of how little I’ve done with the C++ programming language outside of my university. I’ve made a simple tool here or there, but never anything impressive or super relevant to my career.
To fix this, I’ve decided to make a small game to better demonstrate my knowledge. It’s going to be in 2D, because I’ve never written a full game in this language before, and I feel that I work best when I don’t have to worry about the z axis.
I’m also making this open source too, because these days, I don’t see any reasons to keep code to myself.
If you want an idea of how it might play, think of Breakout mixed with boxing. I’ve been playing quite a lot of WarioWare, Inc.: Mega Microgames recently and I’ve been inspired by the simplicity of the score-based minigames you get for completing stages, and how each manages to do a lot with limited controls.
The ‘Jump Forever’ minigame only has one button. Well, to begin with anyway. Apologies for the blurry screenshot.
I’m imagining that instead of a paddle you move back and forth, you instead have a pair of boxing gloves that need to be charged first. The ball is subject to simple gravity, so you have to get into a rhythm of anticipating and hitting the ball at the right time. Why you can’t just punch the bricks yourself is something I’m still thinking about.
I’m giving it the code name of ‘Lily Tin’ for now, but I’m going to try my hardest to find a better title for it before release.
In order to make a game, you need a couple of things. You need a way of letting the player tell the game what to do, you need a way of providing feedback to the player once they’ve told it what they’d like it to do, and you as the programmer need a way to tell the game what feedback to give that player to let them know what their input did. In other words, a feedback loop.
Most people use a game engine like Unity or Unreal to take care of these things for them, but for this project, I’ve decided to go for broke and write my own engine from scratch. Whilst most people I’ve talked to have claimed that this is a terrible idea, given the explosive complexity of modern, general purpose solutions, I felt that the opportunity to learn something outweighed the sheer amount of time needed to create the thing in the first place.
Right now, I’m not worrying too much about what other games I can make with it. It’s going to be bespoke. It only needs to run this one game on one platform and not explode in the process. That’s it.
OK, so I’ve decided that I’m going it alone. Great! But that doesn’t solve all of the problems I mentioned before. Using C++ for its vast list of features and exceptional performance is a good start, but that doesn’t make my feedback problem go away. Unlike more modern languages like Java and C#, C++ doesn’t have much - if anything - in the way of displaying graphics or pumping audio into your ears. In fact, it almost predates the need for those things entirely. So what options do I have?
1. Forget graphics; do it in text
While C++ does have a wide range of tools for console output, and it would make things much easier, I can’t name a single game I’ve played offline from this decade that actually did it this way. Plus, the game I have in mind really needs something more elaborate than that.
2. Actually doing it from scratch
So when I said that I wanted to do this from scratch, that wasn’t entirely accurate. While it is possible for me to display graphics and audio entirely on my own, doing so brings its own set of problems. The main one of these is that no-one can agree on how to put stuff on the screen.
This is to be expected from personal computers, of course. After all, each of the three big operating systems I hope to support (Windows, Mac, and Linux) were all made by different people at different times. Besides; I want to make a game, not another abstraction layer. Which brings me to my next best choice:
3. Just use a library
This might seem obvious. Why didn’t I mention it before? If throwing stuff at the screen is such a common problem, shouldn’t someone else have found a solution already? Well of course they have. It’d be pretty weird for me to talk about games like this if they didn’t.
Trouble is, there are a lot of libraries out there; each of them with their own strengths and weaknesses:
It’s almost impossible to talk about this one without first mentioning GLUT, or the OpenGL Utility Toolkit. Back in the nineties and noughties, if you needed to learn graphics, specifically using OpenGL, GLUT and its derivatives were your go-to solution.
GLUT provided a lever into the OpenGL API using familiar C functions, as well as windows and basic input. Many of its basic functions are still in use today as they bleed into the specification itself.
It was easy to use, if a bit unwieldy, and I’ve heard legends of people making games in it before, but overall it was unsuited to any ‘serious’ development. It’s no surprise then that it was cancelled in 2005 and hasn’t been touched since.
Enter GLFW, or the OpenGL… actually, I have no idea what it stands for. I can’t find it anywhere! I’m going to say ‘framework’. GLFW feels like a replacement for GLUT made by people who know what they’re doing. Whilst it’s certainly interesting and provides a great deal of control, you still have to do almost everything not related to the operating system yourself. I’d only really use this if I needed to do something specific where the size of the library was more important than it is now.
This is the one that you’ve probably seen the most of, thanks to its use in Valve’s Source engine. It occupies a similar space to GLFW, but is much larger and has support for the DirectX API for better performance on Windows.
This would be a perfectly fine choice if it wasn’t for one fatal flaw: it’s written in C. Now while C is a great language in its own right, it’s missing two things critical to how C++ operates: classes and namespaces. These two things fundamentally change how you interact with the program, and tend to clash when put together. It’s not impossible to make them work, but doing so requires a level of architecture that I feel is unnecessary for this project. Plus, I’d have to write my own audio code, which would be a massive hassle for something this small.
This is the one that I ultimately went with, and the main reason why is that it’s based on C++, not C. All of its functionality is properly namespaced, and as such it is much easier to integrate into my project. Plus, it has its own module for audio, so I don’t have to bother with the nitty gritty of beeps and boops.
Fantastic! It seems like I’m on to a winner. A comprehensive library written in the same language that’s easy to implement and has a focus on 2D rendering. All’s well that ends well, right? Not exactly.
Whilst the library is relatively easy to install, it has a large number of dependencies and a wide girth in general. In this case, I feel like that’s a worthwhile payoff so long as I make good use of all of its features in each of the modules, but it is worth considering for future projects.
So I’ve decided on a language, I can give feedback to the player and I can process input. What else is there? Well, once I’ve written my code, I need some way of compiling it into an executable, as well as some way of linking the libraries to that executable.
I’m a Windows developer mainly, so my first instinct is to boot up Visual Studio and muddle my way through the menus. This is a working solution, but it’s not the best way of doing things, or even the most efficient.
The other day, I was trying to compile a game for an acquaintance of mine. As above, they made it in Visual Studio and distributed it that way. I tried debugging it and it wouldn’t start. Why?
Honestly, I’m not entirely sure, but I’m pretty sure it was to do with the version. Visual Studio abstracts a lot of things, which is good because it makes it easier to use, but also bad because of how delicate things can get. Especially when it comes to updates. If one thing goes out of place, your game will just refuse to start unless you spend hours toiling in the Properties menu.
Keep in mind that my game’s open source. We’re in the big wide world here with living, breathing people that have opinions and preferences. There’s a big chance that whoever works on my game next won’t use Visual Studio, or even have a Windows computer at all. This is where meta-build systems like CMake come in.
C++ is a compiled language. This means in order for a computer to meaningfully use your code, you have to convert it once for each machine you want to target, on real (or sometimes virtual) computers.
It’s not like Python where your code is interpreted each time it’s run. It’s tied to your hardware. Even worse is when you write C++ on one computer using that computer’s tools and hand it off to someone else. You have no idea if it’ll work or not, and most of the time it doesn’t.
CMake fixes this by having you create a single, interpreted script that tells your computer how your project should be built regardless of what kind of computer you’re using, called CMakeLists.txt. You write your code and headers as normal, but instead of linking them to a Visual Studio solution, for example, you tell CMake to run the script and do it for you.

Using this method, you can create a stupid amount of different build chains for a ludicrous number of compilers and platforms, but the ones I’m most interested in are Makefiles and Visual Studio solutions.
The downside of this of course is that you’ve got to learn an entirely new programming language, though I’ve spent enough time with CMake at this point to have a reasonable idea of how it works. As for configuring the latest version of SFML with it, that was a tricky process due to the lack of documentation available for 2.5.0 at time of writing.
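For reference, a minimal CMakeLists.txt along these lines might look like the sketch below. The project and source file names are my guesses, not taken from the repository; the sfml-graphics/sfml-audio imported targets are the ones SFML 2.5's own CMake package config provides.

```cmake
cmake_minimum_required(VERSION 3.10)
project(lily_tin CXX)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# SFML 2.5 installs CMake package config files; make sure the install
# prefix is visible to find_package (e.g. via CMAKE_PREFIX_PATH).
find_package(SFML 2.5 COMPONENTS graphics window system audio REQUIRED)

add_executable(lily_tin src/main.cpp)   # hypothetical source layout
target_link_libraries(lily_tin PRIVATE sfml-graphics sfml-audio)
```

From there, `cmake -S . -B build` followed by `cmake --build build` generates and drives whichever build chain CMake picked for the platform, whether that’s Makefiles or a Visual Studio solution.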
I have managed to do it, but I think I want it to have a post all of its own. Stay tuned for that.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474541.96/warc/CC-MAIN-20240224144416-20240224174416-00107.warc.gz
|
CC-MAIN-2024-10
| 10,209
| 46
|
http://ratemydrawings.com/drawings/animation/295744.html
|
code
|
recipe--> spongecake Animation
the website where i got my recipe from is-- [Link]
thanks for watching.. oh and also. if u have questions about spongecakes i cant answer them cause unfortunately i dont know anything about sponge cake.
Created Jul 17, 2008. Unless noted Copyright 2008 anno101 .
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345775580/warc/CC-MAIN-20131218054935-00089-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 293
| 4
|
http://riscpi.co.uk/openwrt-for-raspberry-pi/?doing_wp_cron=1600919971.1406810283660888671875
|
code
|
OpenWRT for Raspberry PI
by editor : Be the first to leave a comment
An anonymous author on Google Sites has worked out a hack to run OpenWRT on a Raspberry Pi. He/she noted that it’s not yet suitable for a production environment. For example:
- SD card failures: suddenly something happens and the root filesystem is in read-only mode. No way to recover, except by a reboot. (mount -o remount,rw /dev/root did not work)
- Non-working/missing kernel modules: many of us want to add a RTC clock to raspberry, well, currently OpenWRT’s kernel and modules don’t have proper support for i2c. Nor many other peripherals available for RasPI. Well, even with working i2c.. I would still be missing modules for rtc devices, such as rtc_ds1307 (which supports ds1338 that I have installed to my Raspberry PI)
- Rebooting issue. Only way to reboot is this: kill -9 <pid of watchdog> – or killall -9 watchdog. Actually this is not entirely true, as there’s a patch for this.. It just hasn’t been included yet in images available at openwrt site’s downloads section.
- Boot issue: didn’t boot out-of-the-box on my Raspberry Pi model B with bigger memory.. Had to get newer bootcode.bin and other files from git first and update configurations..
To see what solution was implemented, and download the hack, go to google sites.
And if you are the author do drop us a line and tell us who you are!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213454.52/warc/CC-MAIN-20200924034208-20200924064208-00305.warc.gz
|
CC-MAIN-2020-40
| 1,394
| 9
|
http://www.fortunesember.com/?p=283
|
code
|
Site was down for a bit. My bad! Had a bad plugin. Problem solved.
Anyways, finishing my Mandalorian outfit still, then starting on Red Hood. Should be fun to do something new, right?
Haven’t slept much in a week. I’m having massive insomnia, and no idea how to stop it. I’m lucky to get 3 hours of sleep a night right now. Expect an actual blog post in a few days.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109157.57/warc/CC-MAIN-20170821152953-20170821172953-00336.warc.gz
|
CC-MAIN-2017-34
| 365
| 3
|
https://ziccibea.com/nvidia/nvidia-1juloq/
|
code
|
NVIDIA 340.84 DRIVER DETAILS:
|File Size:||3.3 MB|
|Supported systems:||Windows 10, Windows 8.1, Windows 7|
|Price:||Free* (*Registration Required)|
NVIDIA 340.84 DRIVER (nvidia_340_1194.zip)
Here is a step by step manual guide for nvidia quadro k2100m software installation process on windows 7 / 8 / 8.1 / vista / xp. With the cuda toolkit, you can develop, optimize and deploy your applications on gpu-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and hpc supercomputers. Nvidia geforce gt 340 driver download and update for windows and linux. Nvidia quadro display driver 340.66 for windows 7/8 64-bit this is the latest driver for nvidia quadro desktop and notebook chips. I am now on the 5.1.3 kernel with the nvidia-340xx 340.107-84 video driver. Python bindings for windows vista / xp. For customer use of service e. 52, you installed the nvidia products.
Geforce 8400 se, download nvidia geforce 8400 se driver v.340.58 v.340.58 pour linux x86 64. The information on this page is only about version 340.84 of nvidia graphics driver 340.84. Quadro notebook drivers are not supported on vista os. 3 kernel with a system selinux file type 'modules object t'. To add the distribution's native package management format. Using nvidia driver 340.84, we have to put the server into user mode bios setting to utilize the nvidia gpus. Now you want to remove nvidia linux.
NVIDIA Graphics Driver.
Python bindings for my nvidia end user license agreement. Tue, you want to the 340. This is too old or run. I never really done these kind of videos so. The new geforce game ready driver, release 340.52, allows geforce owners to continue to have the ultimate gaming experience for metro, redux and final fantasy xiv china . The new geforce game ready driver, release 340.52 whql, allows geforce owners to continue to have the ultimate gaming experience.
Rtx 2060 pas cher ou d'occasion sur Rakuten.
84, i am now starting from 7 / xp. Click the register before installing packages of videos so. Does anyone else have problems developing opengl applications for quadro cards? I have installed the drivers as recommended from nvidia website 340.84 and the display adapter shows correctly in device manager. New in geforce game ready drivers the new geforce game ready driver, release 340.52 whql, allows geforce owners to continue to have. Geforce gt 340, geforce gt 330, geforce gt 320, geforce 315.
Does anyone else have the latest nvidia provides certified drivers today. Python bindings for compatible controller, release 340. Python bindings for microsoft windows graphics cards. Nvidia has released a new set of windows graphics drivers for quadro workstation graphics cards. The nvidia cuda toolkit provides a development environment for creating high performance gpu-accelerated applications.
- Install Latest Nvidia Driver 340.46 via PPA in Ubuntu.
- Also, the 340 series has been forked into its own series of packages to support older cards.
- 01, 00.0 VGA compatible controller: NVIDIA Corporation Device 1f91 (rev a1) prog-if 00 [VGA controller]. All over the internet I saw very different explanations, followed various guides for a week now, starting from the Arch wiki, ofc, but I failed getting my card up and running.
- 340.46-4 memory leak, NVIDIA Developer Forums.
- Download: *this download includes the nvidia-340xx 340.
- sh ./NVIDIA-Linux-x86_64-340. One of the last installation steps will offer to update your X configuration file.
Quadro notebook drivers are not supported on windows vista 32/64-bit operating systems. Click the search button to perform your search. With this message then you may want to register link above. Python bindings for windows 7 and loads a dla. Oems may interact better with, grid vapps or mismatched driver. A way to remove nvidia graphics driver 340.84 with the help of advanced uninstaller pro nvidia graphics driver 340.84 is an application offered by the software company nvidia corporation. Has been removed from the active host thread executes the 5.
Hello, we recently got a pe m610x second hand as well as an nvidia tesla m2075 card. By clicking the agree & download button below, you are confirming that you have read and agree to be bound by the license for customer use of nvidia software for use of the driver. For the installation process on the selection below.
Download beta and older drivers for my nvidia products. Note that many linux distributions provide their own packages of the nvidia linux graphics driver in the distribution's native package management format. Python bindings for the nvidia management library. Details for use of this nvidia software can be found in the nvidia end user license agreement. Manually search for drivers for my nvidia products. The driver will begin downloading immediately after clicking on the agree & download button below. 66.84 mb, download *this download includes the nvidia display driver and geforce experience application.
Improved compatibility with recent linux kernels. Install nvidia 340.46 via ppa, besides using the official installer, we can easily install the driver from a launchpad ppa. 2 for the first time, i'm using linux mint xfce 19.2 in a laptop lenovo thinkpad t61 with nvidia graphics card g86m - quadro nvs 140m . Owners to support for nvidia products.
Improved compatibility with a reference driver will offer to think this. 269.84 mb, download *this download includes the nvidia display driver and geforce experience application. This may interact better with the rest of your distribution's framework, and you may want to use this rather than nvidia's official package. 2 thoughts on nvidia r340.84 for quadro desktop and notebook released mcleary 20 at 02, 27. Release note and supported gpus are available in the nvidia page. Note that can take advantage of service e.
I've connected a graphics adapter to my ubuntu 14.10 server. Desktop and quadro, launch and supported on supported nvidia products. The newest 340 drivers contain a memory leak that causes x to hang requiring a restart of xorg/greeter. Automatically scan your pc or search the driver database for compatible gpu drivers. I found on another forum that nvidia drivers version 415.23 and newer have fixed a build failure, unknown type name ipmi user t', when building the nvidia kernel module for linux kernel 4.20. I've had this problem for way to long and decided to contact nvidia luckily they came up with a simple solution. The gpu is working on the server but not on the vm. Driver Samsung Mini Laptop Camera For Windows 8 Download.
Nvidia provides these notes to describe performance improvements and bug fixes in each documented version of the driver. Quadro vdws, download the nvidia page. Release date, optimize and hpc supercomputers. Asrock ddr2.
QLOGIC BCM5716C GIGABIT ETHERNET DRIVER DOWNLOAD (2020). Update your graphics card drivers today. Returns in *device the device on which the active host thread executes the device code. Download drivers for nvidia graphics cards, video cards, gpu accelerators, and for other geforce, quadro, and tesla hardware. This will help if you installed an incorrect or mismatched driver. Enterprise customers with a current vgpu software license grid vpc, grid vapps or quadro vdws , can log into the enterprise software download portal by clicking below.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198287.23/warc/CC-MAIN-20200920161009-20200920191009-00567.warc.gz
|
CC-MAIN-2020-40
| 7,355
| 22
|
https://www.datawarehousecenter.com/tag/cloud-computing/
|
code
|
Recorded on Mar 23 2016 at GCP NEXT 2016 in San Francisco.
Open-source data tools allow you to process data in volumes not possible a few years ago, but they can be hard to setup, maintain, and run economically. With years of practical and research experience, Google Cloud Platform is engineering new ways to use existing frameworks, such as Apache Spark and Hadoop, and designing the next generation of data processing tools in Apache Beam (incubating).
Are you looking to deploy your mission-critical application to a managed Postgres database service that is secure, reliable, available—as well as performant and scalable? If yes, this session is for you.
You will learn about the new Flexible Server option in Azure Database for PostgreSQL that gives you zone-redundant HA as well as more control of your database configuration, networking options, and maintenance window. You will also learn how Citus—an Open Source extension to Postgres that transforms Postgres into a distributed database—is changing what’s possible for data-intensive applications. We’ll walk through some interesting applications that are already scaling out with Hyperscale (Citus), a built-in deployment option in #Microsoft #Azure Database for PostgreSQL. And we’ll explore how Kubernetes has opened the door to running Postgres on Azure in hybrid scenarios, too.
Join us for an introduction to the latest release of SQL Server and learn about all its new capabilities from cloud-connected to built-in query intelligence.
0:50 What’s new
1:42 The next step for SQL Server
2:13 SQL Server 2022
5:09 Query Store and Intelligent Query Processing
7:09 Industry-leading database engine
8:37 Data lake virtualization and object storage
10:13 Extending T-SQL
11:47 Getting started
System administrators have a diverse set of roles and responsibilities, they can range from configuring servers, monitoring the network, setting up new users and computers, and more. Think of a system administrator as a tech generalist, they handle many different things to maintain reliable computer systems in a multi-user environment.
0:00 Maintaining Reliable Computer Systems
1:57 What is Systems Administration?
4:23 Servers Revisited
8:41 System Administration: The Cloud
12:16 Organizational Policies
14:43 IT Infrastructure Services
15:48 User and Hardware Provisioning
19:41 Routine Maintenance
20:46 System Administrators: Vendors
22:36 Troubleshooting and Managing Issues
25:41 In Case of Fire, Break Glass
26:57 With Great Power Comes Great Responsibility
30:44 Never Test in Production
33:26 Assessing Risk
35:22 Fixing things the Right Way
This video is part of the Google IT Support Certificate, which introduces learners to troubleshooting, customer service, networking, operating systems, systems administration, and security. The program, created by Google employees in the field, is designed to provide you with job-ready skills in about 6 months to start or advance your career in IT.
To access the full program content including readings, practice exercises, job search help, and discussion forums please visit ► https://goo.gle/3oQB1i9
Why earn a Google Career Certificate?
► No experience necessary: Learn job-ready skills, with no college degree required.
► Learn at your own pace: Complete the 100% online courses on your own terms.
► Stand out to employers: Make your resume competitive with a credential from Google.
► A path to in-demand jobs: Connect with top employers who are currently hiring.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00573.warc.gz
|
CC-MAIN-2023-14
| 3,504
| 36
|
http://work-ethic.net/asp-net-resume-sample.html
|
code
|
Asp Net Resume Sample
20 examples to show you how to write a software engineer resume.
Asp net resume sample. Learn aspnet for visual studio 2015 through expert step by step instruction. Use our sample and a template. It provides elapsed time in seconds since the page was initialized. I have found sample code to print to a picturebox control but the sample code always wiped the picture when printing to it.
A complete guide to writing a resume for software developers. How to upload and download files in aspnet core mvc. Download source code from github. Computer science bsc graduate looking for a job where i can leverage my knowledge of c and aspnet mvc architecture working with a highly effective team in the medical field.
Sample resume preschool teacher resume. This article provides a sample resume format for those applying for the post of preschool teacher. Under the top level information there is trace log which provides details of page life cycle. 12 minutes to read contributors.
Professional summary msbi developer 7 years of experience as business intelligence developer and data analyst in production development and staging environments. In an empty project update startup class to add services and middleware for mvc. Using asynchronous methods in aspnet mvc 4. This tutorial will teach you the basics of building an asynchronous aspnet mvc web application using visual studio express 2012 for web which is a free version of microsoft visual studio.
So i have used a few lines from the sample and wrote a class to continually print on a line or add a new line when necessary.
- Resume Samples Career Objective
- Dementia Caregiver Resume Sample
- Sample Resume For Assistant Professor In Engineering College
- Technical Writing Resume Samples
- Hr Director Sample Resume
- Sample Of Contract Agreement For House Rental
- Sales Contract Template
- A Good Resume Sample For Fresh Graduate
- Sample Cover Letter For Resume Customer Service
- Teacher Resume Sample Pdf
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202303.66/warc/CC-MAIN-20190320064940-20190320090940-00022.warc.gz
|
CC-MAIN-2019-13
| 1,988
| 17
|
http://kmeleon.sourceforge.net/forum/posting.php?2,reply,115393,quote=1
|
code
|
: K-Meleon Forum
K-Meleon development related discussions.
[quote=JamesD] @ Matt Oops. Let me do some more checking. Sorry, I did not test completely. I got in a hurry to leave on trip. Anyway there is a missing & in the code. Replace the following line [code] $_F_Mgr_i_Error ? &_F_Mgr_i_Language3 : _F_Mgr_i_Language2 ; [/code] with this line [code] $_F_Mgr_i_Error ? &_F_Mgr_i_Language3 : &_F_Mgr_i_Language2 ; [/code][/quote]
K-Meleon forum is powered by
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868239.93/warc/CC-MAIN-20180527091644-20180527111644-00177.warc.gz
|
CC-MAIN-2018-22
| 458
| 4
|
https://blenderartists.org/t/blender-2-5-quick-edit-with-gimp/482874
|
code
|
How do i use gimp with blender 2.5, every time i try it encounters an error, do i need to set something up?
that’s not news!
in any case, they are separate programs. What is stopping you from opening Blender’s output image in Gimp? if you didn’t save the image to a folder of your preference, it’s usually output to some temp folder…
One issue you may be having is you are working on a blend file that hasn’t been saved at all. Try saving the blend file to a directory and trying again.
any error message in the console?
Here is a picture of what happens
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00597.warc.gz
|
CC-MAIN-2021-10
| 566
| 6
|
http://steamcommunity.com/sharedfiles/filedetails/updates/92940102/1398256751
|
code
|
STORE COMMUNITY ABOUT SUPPORT
Greenlight is being retired. For more information on how to submit games to steam, refer to this blog post.
This item has been removed from the community because it violates Steam Community & Content Guidelines. It is only visible to you. If you believe your item has been removed by mistake, please contact Steam Support.
This item is incompatible with Greenlight. Please see the instructions page for reasons why this item might not work within Greenlight.
Current visibility: Hidden
This item will only be visible to you, admins, and anyone marked as a creator.
Current visibility: Friends-only
This item will only be visible in searches to you, your friends, and admins.
Claustrophobia: The Downward Struggle
Claustrophobia Development Log #9
April 23, 2014 - The Indie Forge
Hello all! Welcome to Claustrophobia Dev Log #9! Once again, apologies for the month of absence. My University work is finally done (hooray!), which means Claustrophobia now has my full attention. Since the last update was so long ago, I have a lot to talk about. So I’ll keep this bit short, and here goes!
Randomly generated gear was a massive part of V1, but it lacked much of the item depth that I really wanted to get into the game. An item was only considered “better” than another piece if it had a higher value for your character’s single base stat, which not only made gear progression fairly boring, but also made Plate armour almost always the best choice, due to the high Armour Rating.
This time, gear generation is much deeper, partly due to the changes and additions to base stats, but mostly due to item properties. Item properties modify all sorts of things, from elemental damage, critical strike chance, increased gold find, lifesteal, chance to cause status effects, etc. The finished game will have a large number of different item properties, ranging from common stat modifiers, to unique passive abilities, such as summons and spell effects. The generator has an already massive selection of rules on how items should be created, based on the item type, spawn level, rarity, and base stat type. Here is an example of a selection of level 10 items:
Stat values are yet to be balanced (that staff for example, has waaaay too much damage. Then again, it is “The Devourer”…), but this gives you an example of the sort of thing to expect. I’ve seen some absolutely ridiculous level 50+ legendaries generated with 10 or more properties, which just made me happy. I have some great ideas for new properties too.
Of course, tonnes of gear would not be fun unless your character is running around wearing it! So of course, visual equipment returns:
Once again, graphics pictured here subject to change! While working on this system, I also decided to trial something that was not possible in V1 due to the limitation of the sprite size: visual weapons. While they would not be animated due to lack of time and artistic skill on my behalf, they do, in my opinion, look pretty cool, and they just add a little bit more to character customization as a whole.
I do have one issue however - how they should be displayed. Due to the nature of Claustrophobia’s sprites, the player will always be locked in the “standing” stance pictured above. This causes a couple of problems when it comes to lining the weapons up to the player’s hands. For example:
In option 1, a natural position is used, which unfortunately covers the player’s face when facing right or dual wielding. In option 2, the weapons are kept away from the face, but the positioning is unnatural. So, I’d like to know everyone’s opinion on this. Should I:
- Not show weapons (easiest, but no weapon graphics :’( )
- Use Option 1 (natural, obscures face)
- Use Option 2 (no obstruction, unnatural)
- Angle the weapons vertically (I also tried this, but I felt the weapons were to close to the body. Also wouldn’t really work with bows…)
- Something else? (any better ideas?)
Saving and Loading
Finally, the majority of the code for saving and loading the game has been written. Since the new engine is structured in a much nicer way, there shouldn’t be any of the weird loading oddities that V1 suffered (I’m looking at you teleporting doors). Once again I’m using XML, but I’ll need to look into encrypting everything this time. The new system will also allow multiple save files.
As I said before, these two systems in place represent half of the major systems left to do. Of course, both of these need finalizing (generated items don’t currently save, for example), but they’re in place. Which just leaves the skill system and character creation before I can move on to working on content!
Thanks for reading! Until next time,
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647556.43/warc/CC-MAIN-20180321004405-20180321024405-00540.warc.gz
|
CC-MAIN-2018-13
| 4,758
| 28
|
https://cncnz.com/downloads/generals-downloads/
|
code
|
Generals Combat Cards
Download CombatCards.exe (842 KB)
A free, promotional mini-game from the time Generals was first released. Its gameplay is simple: the player chooses a faction (one of the three from the full game, of course) and plays against the AI, with each side in turn choosing a unit statistic. The card with the lower value of that statistic is moved to the other player's deck as a captured unit. The game ends when either player loses all cards.
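The round logic described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the game's actual code; the card names and stat names are made up, "less optimal" is assumed to mean the lower value, and the description doesn't say how ties are resolved, so a "draw" result is assumed here:

```python
from dataclasses import dataclass

@dataclass
class Card:
    name: str
    stats: dict  # e.g. {"armor": 3, "speed": 7} -- hypothetical stat names

def play_round(attacker_card: Card, defender_card: Card, stat: str) -> str:
    """One round: a statistic is named, and the card with the lower
    value is captured by the other player, as described above."""
    if attacker_card.stats[stat] < defender_card.stats[stat]:
        return "defender_captures"
    elif attacker_card.stats[stat] > defender_card.stats[stat]:
        return "attacker_captures"
    return "draw"  # assumed tie rule -- not specified in the description

tank = Card("Crusader Tank", {"armor": 8, "speed": 4})
buggy = Card("Attack Buggy", {"armor": 2, "speed": 9})
print(play_round(tank, buggy, "armor"))  # attacker_captures
print(play_round(tank, buggy, "speed"))  # defender_captures
```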
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141737946.86/warc/CC-MAIN-20201204131750-20201204161750-00586.warc.gz
|
CC-MAIN-2020-50
| 480
| 3
|
https://www.hairandbeauty21.com/scissors-pouch-hair-stylist-tools-bag-multifunction-hairstylist-holster-pouch-anself-leather-scissor-comb-hairdressing-belt-pouch-color-brown/
|
code
|
Jan 09,2021 05:53:49 UTC –
Removable design, easy to disassemble or assemble, easy to clean broken hair, adjustable strap, easy to carry.
Help you always put the most important tools around you.
5 scissor pockets and 2 pockets for more haircut tools.
Made of PU leather, the leather is soft and comfortable, which can protect the scissors from scratches and is durable.
Available in 5 colors: Black, Brown, Coffee, Orange, Yellow.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513194.17/warc/CC-MAIN-20210117205246-20210117235246-00639.warc.gz
|
CC-MAIN-2021-04
| 441
| 6
|
http://www.pcguide.com/ref/video/3dImages-c.html
|
code
|
[ The PC Guide | Systems and Components Reference Guide | Video Cards | 3D Video Acceleration ]
3D Images and Operations
3D images are much more complex than 2D images because of the much greater amount of
information that must be used in order to create a realistic 3D world. In addition,
several mathematical operations must be used in order to convert this 3D world to one that
can be projected on a computer screen.
When you look at the world, your eyes and brain do this automatically. Most of the
operations that allow you to perceive a three-dimensional world occur so seamlessly that
you don't even realize they are happening. Have you ever wondered how you can, say, look
at a nature scene of a mountain range, forest and a lake, and know that the
mountains are farther away than the lake is, for example? This occurs through a complex
interaction of visual effects--light levels, shadowing, relative motion--combined with
your own knowledge of how the world works. The job of a 3D graphics engine is to duplicate
this to whatever extent possible so that what you see on the screen seems realistic to the viewer.
3D images are handled inside the computer using abstract models. Usually, each 3D
object is composed of hundreds or even thousands of small triangles (or other polygons)
that describe its structure. When the program wants to move an object, it manipulates the
corners of the triangles to create movement. (This is highly simplified of course but
gives you the general idea). Of course real objects aren't made up of thousands of
triangles but doing it this way is necessary in order to do the animation.
The heavy computation work is involved in converting these hollow triangles into solid
surfaces. In the real world, objects aren't islands; they interact. They overlap one
another, cast shadows, reflect light, and they appear dimmer when in the distance. There
are very complex mathematical equations that are used to determine when an object is
visible in a scene based on a given angle, what color it should be, etc. If you are
playing a 3D game and want smooth animation, these calculations must be redone 20+ times
per second! This is why 3D accelerators are used--they are customized to perform these calculations.
Each time the screen is recalculated (due to movement in a game, for example), it is
necessary to recalculate the color and intensity of each pixel on the 2D screen! This is
done by applying different 3D computations to the scene, in a process that is called rendering.
There are several different types of computations that are performed in 3D processing.
Some cards support more of them than others, and some are more efficient at certain ones
than others are. Here are some of the more common 3D operations:
- Gouraud Shading: This is an algorithm that is used to give 3D surfaces realistic
shading. The effect helps the object appear to have depth and helps to define the shape
better. It is a popular computation that is used in many 3D games.
- Clipping: This operation determines what part of an object is visible on the
screen and "clips out" any part that the user cannot see. This saves time since
the parts of objects that are off-screen are ignored.
- Lighting: Objects in the real world have their appearance shaped by the light
sources in the scene. Lighting effects cause color shading, light reflection, shadows and
other effects to be added to objects based on their position and the position of light
sources in the room. Light sources can be anything from an overhead light in an internal
room in an office (or castle) to the sun, moon, or even an explosion!
- Transparency: Some objects in the real world are transparent or semi-transparent.
Special calculations can be done to determine what objects are visible through a glass
door, for example.
- Texture Mapping: For realistic objects, it is necessary to overlay pictures on
them to give them texture. For example, most walls are not made of a solid, flat
substance. They are made of a material such as brick, wood or plaster, and they may have
pictures, tapestry or signs on them. Texture mapping allows objects to be made so that
they appear to have substance instead of being "flat". There are in fact several
different types of texture mapping that are used by various software and hardware.
- Dithering: This is an effect that is actually used in many different places,
including regular 2D graphics and also in printing. Dithering is the process of mixing a
small number of colors together in specific patterns to create the illusion of there being
a larger number of colors. For example, inkjet color printers use dithering to create a
wide spectrum of apparent color, even though each dot printed is only one of three (or
four) different real shades. In 3D, it is used largely to show more realistic color
without needing to increase the color depth of the image (which means more computation
time and more memory to store the graphics).
- Fogging: An effect used in outdoor scenes, fogging serves two purposes by
blurring objects that are in the distance. First, it helps to make the scene appear more
realistic. If you've ever looked at a mountain scene you know that in the distance,
objects appear fuzzy due to atmospheric moisture. Second, fogging allows the 3D process
to be performed more quickly because those objects in the distance that are "fogged
out" can be computed more quickly since they are shown in less detail.
- Filtering: There are several types of filtering that can be applied to the image.
These are used to "clean up" the image and smooth out textures and shapes. In
particular, bilinear filtering is used when showing textures up close to remove the
"blocky" look that results from magnifying an object when showing it at the
front of a scene.
- Buffering: This isn't really a 3D operation like the others listed here, because
it isn't something that is done to the data. However, advanced 3D cards include
memory buffers that are used for various tasks during these complex calculations. The more
buffers the card has available, the more flexibility it has when doing advanced
operations. This is why 3D cards usually need more memory than would strictly be necessary
just to hold the screen image. Newer AGP
systems can use the system memory for this.
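The bilinear filtering operation mentioned above is simple enough to show concretely. A minimal sketch in Python (the texture data here is made up, and real hardware works on RGB texels rather than the single grayscale values used for brevity): the sample blends the four nearest texels, weighted by the fractional distance to each.

```python
def bilinear_sample(texture, u, v):
    """Sample a 2D texture (list of rows of grayscale values) at
    fractional coordinates (u, v) by blending the four nearest texels."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(texture[0]) - 1)  # clamp at the texture edge
    y1 = min(y0 + 1, len(texture) - 1)
    fx, fy = u - x0, v - y0                # fractional weights
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

tex = [[0, 10],
       [20, 30]]
print(bilinear_sample(tex, 0.5, 0.5))  # 15.0 -- midpoint of all four texels
```

Sampling between texels this way is what removes the "blocky" look of magnified textures described in the filtering bullet.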
Performance Issues and Tradeoffs
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212768.50/warc/CC-MAIN-20180817182657-20180817202657-00191.warc.gz
|
CC-MAIN-2018-34
| 6,351
| 82
|
https://soupday.github.io/cc_unity_tools/installation.html
|
code
|
Hosting at Github
The tool is hosted in separate Github repositories for the HDRP, URP and 3D (Built in render pipeline) render pipelines. It is intended to be installed into Unity 2020.3 or above for HDRP and URP pipelines or Unity 2019.4.11f1 or above for the Built in 3D pipeline using Unity’s internal package manager. The available repositories are shown below.
Only install one version of the tool at a time. Only use the package manager for installation.
To obtain most recent stable version follow the above link to the latest release, and download the source code.zip file from there.
Alternatively, you visit the appropriate repository to obtain the latest commit to the main branch by pressing the green ‘Code’ button and then ‘Download Zip’ from the dropdown window. Alternatively the code can be cloned directly from github via HTTPS using the ‘git URL’ which can be copied from the dropdown window (discussed later).
Installation from .zip file
Download and Unpack .zip file
Download the appropriate latest release or latest stable commit (from the code dropdown box). Unpack the .zip file into a safe + non volatile directory where you’ll be able to store the package files (7zip is a suitable tool for this, should you lack one).
You must store the package files in a safe place. You can make a directory inside your project directory (eg ‘<drive>:/~~/<project directory>/downloaded files’) if you wish.
DO NOT place the unzipped files into the /Assets /Packages /Library or /ProjectSettings directories of your project.
Install the Package From Disk
In Unity, open your project and navigate to the ‘Package Manager’ (via Window -> Package Manager).
Now click the ‘Add’ button.
And select ‘Add package from disk’.
Navigate to the place where you unpacked the .zip file and select package.json.
The package manager will now install the tool and will end up looking like this.
The tool is fully installed and ready to be used. Remember to keep the files you installed from in a safe place.
Installation from git URL
Packages can be installed into Unity directly from a git repository.
The sole requirement for this is that Unity must be able to find the git.exe executable somewhere on your PATH.
If you require a git executable you can install git for windows (download the 64bit windows installer and accept all the default options when installing).
Please make sure you restart Unity and Unity Hub after installing git for windows otherwise you will encounter the following error.
Installing from Github
Open the Unity package manager Window -> Package Manager, click the add (+) button and select ‘Add package from git URL’.
Copy the URL from the green code dropdown box in the git repository.
Paste this into the package manager and click ‘Add’. The package manager will now install the tool.
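Under the hood, adding a package by git URL writes an entry into your project's Packages/manifest.json. A rough sketch of what that entry looks like (the package name and repository path below are illustrative placeholders, not the tool's actual identifiers):

```json
{
  "dependencies": {
    "com.example.cc-unity-tools": "https://github.com/<owner>/<repo>.git"
  }
}
```

Editing manifest.json by hand is equivalent to using the package manager UI; removing the line removes the package on the next project load.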
Installation using the Github Desktop [Advanced]
Using the Github Desktop application to clone the repository is a very convenient means of keeping the addon installation up to date (it also allows easy switching between the main and dev branches if you want to test in-development features).
The addon can be installed into your Unity project(s) from wherever you have cloned the repository and it can be updated by ‘Fetching’ the latest changes without the need to uninstall/reinstall.
Download and install the application from the Github Desktop web page.
You won’t need an Github account to proceed.
Open the application and chose the menu option File -> Clone Repository…
Input the desired path to the repository and the local path where the cloned repository will be stored on your system.
The Repo names are either:
(a repository URL can also be used)
The application will clone the repository
Once complete, install the package as per Install the Package From Disk (open the Unity project then the package manager and select add package from disk. Navigate to the cloned repository and select the package.JSON file).
Keep the package up to date using the ‘Fetch Origin’ button.
If there any changes then ‘Pull’ them with the ‘Pull Origin’ button.
The HDRP and URP versions of the tool require minimum versions of the following packages to also be installed in your project.
10.5.0 or above.
10.5.0 or above.
Post Processing Package
Users of the 3D (built in render pipeline) and URP (universal render pipeline) should consider the optional installation of the post processing package from the Unity Registry. This will be utilized automatically to give excellent quality results comparable to Character Creator’s viewport.
To install the post processing package, go to the package manager window and use the ‘Packages’ dropdown to change the list show to ‘Unity Registry’.
Scroll down the list to find ‘Post Processing’. Select the item in the list and click install.
After installation the post processing stack will be added to the main camera and the custom settings automatically applied.
Should you wish to use the Alembic file format for baked physics geometry (and indeed the Alembic functions from this tool), then the Alembic package is also required from the ‘Unity Registry’.
Use the above method to navigate to the ‘Unity Registry’ and select the Alembic package and click install.
This will allow Unity to correctly import and animate Alembic files.
Open the Unity package manager (Window -> Package Manager) highlight the package that you wish to remove and click the remove button.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648245.63/warc/CC-MAIN-20230602003804-20230602033804-00217.warc.gz
|
CC-MAIN-2023-23
| 5,487
| 51
|
https://www.teanglann.ie/en/fgb/caintigh
|
code
|
IN FOCLÓIR GAEILGE—BÉARLA
caintigh1, v.t. & i. (vn. -iú m, gs. -ithe). 1. Speak (le, to). Ní chainteoinn le duine ar bith air, I wouldn’t mention it to anybody. Ní chainteodh sí liom, she wouldn’t speak to me, was not on speaking terms with me. 2. Address, accost. Chaintigh sé mé, he accosted me.
caintigh2, gsm. of cainteach.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100779.51/warc/CC-MAIN-20231208212357-20231209002357-00645.warc.gz
|
CC-MAIN-2023-50
| 341
| 3
|
https://favtutor.com/blogs/machine-learning-algorithms-for-beginners
|
code
|
As a human being can recognize faces and detect images using cognitive skills, with technological advancement, it is now even possible for machines to perform activities that a human can do and even more!
From the beginning of their lives, humans collect data and analyze it to find patterns, our brains are trained in this way to have cognitive skills and interpret data. Likewise, computers can be trained to find a pattern in the data and make appropriate predictions, this is called machine learning. Based on what kind of predictions the models make, there are different machine learning algorithms.
In this guide, we are going to discuss various types of machine learning algorithms for beginners. There are two types of datasets on which algorithms can be trained: one with labeled prediction data, and one with only raw data and no actual prediction values to train your model on. The former falls under supervised learning, where your models train on known predictions, whereas the latter is unsupervised learning, where the model trains on undetected and unlabeled data. Let's discuss these algorithms in detail!
Supervised learning is a category of machine learning algorithms where you have input factors (x) and an output variable (Y) and you utilize an algorithm to create a mapping from the input to the output variable.
Y = f(X)
The objective is to create an efficient and well-defined function that can create predictions as the output on inputting unseen data.
Supervised learning can be further structured into regression and classification problems.
- Regression: It is an algorithm that predicts a continuous real value. Eg. Predicting gold prices. There are many different types of regression algorithms. The three most common are listed below:
- Linear Regression
- Polynomial Regression
- Classification: It is an algorithm that predicts class. Eg. Predicting if the patient is suffering from heart disease or not. Classification problems can be solved with a numerous number of algorithms. Suitable algorithms can be chosen depending upon the data and the structure of the data. Here are a few popular classification algorithms:
- Logistic Regression
- K-Nearest Neighbor
- Support Vector Machines
- Naive Bayes
- Common supervised learning algorithms, which can be used for regression and classification algorithms problems:
- Decision Trees
- Random Forest
Regression problems are unique in that they expect the model to output a real continuous value: for example, predicting stock prices or home loan prices.
1) Linear Regression
In a simple linear regression algorithm, we create predictions from a single variable. The output attribute is known as the target variable and is referred to as Y. The input parameter is known as the predictor variable and is referred to as X. When we consider only one input parameter, the algorithm is known as simple linear regression.
“Linear Regression fits a linear model with coefficients w = (w1, …, wp) to minimize the residual sum of squares between the observed targets in the dataset, and the targets predicted by the linear approximation.”
After creating the test and training data, train the model using the scikit-learn library.
The sample plot for the training data following simple linear regression is:
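The original notebook's code and plot are not reproduced here; a minimal sketch of that scikit-learn workflow, using synthetic data as a stand-in for the original dataset, might look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the original dataset: y is roughly 3x + 4 plus noise
rng = np.random.RandomState(0)
X = rng.rand(100, 1) * 10
y = 3 * X.ravel() + 4 + rng.randn(100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)            # learns slope and intercept
predictions = model.predict(X_test)    # continuous real-valued outputs
```

The fitted `model.coef_` and `model.intercept_` give the line that would appear on the training-data plot.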
2) Polynomial Regression
In polynomial regression, the input and output variables are mapped through an nth-degree polynomial. Polynomial regression does not require the relationship between the input and output variables to be linear; this is the basic difference between linear and polynomial regression.
The code below illustrates how you can train a polynomial regression model using Python:
Plots for linear regression and polynomial regression:
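Since the original code block is missing, here is a minimal sketch of that idea, again on synthetic data: the input is mapped to polynomial terms and an ordinary linear model is fit on top.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic cubic data (a stand-in for the original dataset)
rng = np.random.RandomState(0)
X = np.sort(rng.rand(50, 1) * 4 - 2, axis=0)
y = X.ravel() ** 3 - X.ravel() + rng.randn(50) * 0.1

# Map the input to polynomial terms (1, x, x^2, x^3), then fit a linear model
poly = PolynomialFeatures(degree=3)
X_poly = poly.fit_transform(X)
model = LinearRegression().fit(X_poly, y)
```

A plain linear fit on this data would miss the curvature; the degree-3 features let the same linear solver capture it.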
3) Logistic regression
Logistic regression algorithms are used for classification problems. The name comes from the logistic function used to generate predictions.
The logistic function is also known as the sigmoid function (or activation function); it converts the model's output into a discrete categorical value. It is an S-shaped curve that takes a real-valued number and maps it to a value between 0 and 1. The logistic function is:
1 / (1 + e^(-value))
After creating and scaling the training and test datasets, we can fit the training data to the model, create predictions on the test data, and build a confusion matrix.
The decision boundary and scatter plot for the training data predicting if the user will purchase the commodity based on the data from social media advertisement, looks like this:
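The social-media-advertisement dataset used in the original isn't shown, so this sketch substitutes a generated two-feature dataset to illustrate the same scale / fit / predict / confusion-matrix steps:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic two-feature data standing in for the social-ads dataset
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_train)           # scale as described above
clf = LogisticRegression().fit(scaler.transform(X_train), y_train)
y_pred = clf.predict(scaler.transform(X_test))
cm = confusion_matrix(y_test, y_pred)            # rows: true, cols: predicted
```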
Like linear regression, the model can overfit if the input parameters are highly correlated; to cure this, we can compute pairwise correlations between inputs and remove the highly correlated ones.
4) K-Nearest Neighbor algorithm
KNN is used for both regression and classification problems; however, it is mostly used for classification. The K-nearest neighbors (KNN) algorithm uses ‘feature similarity’ to predict the values of new data points: a new data point is assigned a value based on how closely it matches the points in the training set. The algorithm works as follows:
- Step 1: Creating training and test data.
- Step 2: Choose the value of K, i.e. the number of nearest data points to consider. K can be any integer. Then, for each point in the test data, do the following:
- 2.1: Calculate the distance between the test point and each value of the training data using one of the standard metrics: Euclidean, Manhattan, or Hamming distance. Euclidean distance is the most common method.
- 2.2: Sort the distance in ascending order.
- 2.3: The algorithm chooses the top K rows from the sorted array.
- 2.4: It will assign a class to the test point based on the most frequent class of these rows.
The training data is fit to the KNN model, predictions are created using the test data, and a confusion matrix is built:
The plot of the training data and labels with decision boundary according to the KNN classification algorithm:
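The plot itself isn't reproduced here; a minimal sketch of the steps above on generated data (K = 5, Euclidean distance as the text recommends) might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the original training/test data
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

# K = 5 nearest neighbours, Euclidean distance (the most common choice)
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
```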
5) Support Vector machines - Kernel SVM
The aim of support vector machine algorithms is to find a hyperplane in an N-dimensional space (where N is the number of features) that distinctly classifies the data points.
There are many possible hyperplanes that separate the two classes of data points. Our aim is to find the hyperplane with the maximum margin, i.e. the maximum distance to the data points of both classes. Maximizing the margin improves the model's efficiency and lets it predict with more confidence.
The code to scale and fit the training data is:
The plot of the training data and labels with decision boundary according to the SVM classification algorithm:
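The scaling-and-fitting code referenced above is missing, so here is a hedged sketch on generated data using scikit-learn's `SVC` with an RBF kernel (matching the "Kernel SVM" heading):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in dataset
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=2)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=2)

# Scale the features, then fit an RBF-kernel SVM
scaler = StandardScaler().fit(X_train)
svm = SVC(kernel="rbf", random_state=0)
svm.fit(scaler.transform(X_train), y_train)
accuracy = svm.score(scaler.transform(X_test), y_test)
```

Scaling matters here: the RBF kernel is distance-based, so features on very different scales would dominate the margin.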
6) Naïve Bayes Algorithm
The Naïve Bayes classifier is one of the simplest and most effective classification algorithms; it helps build fast machine learning models that make quick predictions.
It is a probabilistic classifier, which means it predicts based on the probability of an object. Some popular applications of the Naïve Bayes algorithm are spam filters, sentiment analysis, and article classification.
The algorithm follows the following equation:
P(h|d) = (P(d|h) * P(h)) / P(d)
And this is how we fit the data to the naive Bayes model:
Naive Bayes is often extended to real-valued attributes, most commonly by assuming a normal distribution.
This extension of naive Bayes is named Gaussian Naive Bayes. The Gaussian (normal) distribution is the easiest to work with because we only need to estimate the mean and the variance from our training data.
The plot of decision boundary for a gaussian naive Bayes algorithm on training data:
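The fitting code and plot are missing from the extract; a minimal Gaussian Naive Bayes sketch on generated data, matching the description above, might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in dataset
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=3)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=3)

# Gaussian NB estimates a per-class mean and variance for each feature
nb = GaussianNB().fit(X_train, y_train)
accuracy = nb.score(X_test, y_test)
```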
Common Supervised learning Algorithms
7) Decision Tree Algorithm
The decision tree, as the name suggests, works on the principle of conditions. It is an efficient and powerful algorithm used for predictive analysis. Its main components are internal nodes, branches, and leaf (terminal) nodes.
Every internal node holds a “test” on an attribute, branches hold the outcome of the test, and every leaf node carries a class label. Decision trees are used for both classification and regression, which are both supervised learning tasks. They are extremely sensitive to the data they are trained on: small changes to the training set can result in fundamentally different tree structures.
Trees answer sequential questions that send us down a particular branch of the tree given the answer. The model acts on "if this then that" conditions, ultimately yielding a specific outcome. The code to fit the training data to the decision tree classification model:
The plot showing the decision boundary for the decision tree classification algorithm for the training data.
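The code block itself was not preserved; a hedged sketch on generated data (using entropy as the split criterion, one common choice) might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in dataset
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=4)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=4)

# Entropy (information gain) as the split criterion; fixed seed so the
# tree structure is reproducible despite the algorithm's sensitivity
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_train, y_train)
accuracy = tree.score(X_test, y_test)
```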
8) Random Forest Algorithm
Random forest, as its name suggests, comprises a large number of individual decision trees that operate as a group or, as they say, an ensemble. Every individual tree in the random forest outputs a class prediction, and the class with the most votes becomes the model's prediction.
Random forest achieves this by permitting every individual tree to randomly sample from the dataset with replacement, resulting in different trees; this is known as bagging. You can refer to a detailed tutorial on the random forest classifier by building a credit card fraud detection project using machine learning.
Fitting the training data:
The plot for the training data for the random forest classification. You can notice that the plot somewhat looks like the plot for the decision tree but with better accuracy of the decision boundary.
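A minimal sketch of that fit on generated data (100 bagged trees, a common default) might look like this; the ensemble usually gives a smoother decision boundary than the single tree above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in dataset
X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           random_state=5)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=5)

# 100 bootstrapped trees vote; the majority class is the forest's prediction
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
accuracy = forest.score(X_test, y_test)
```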
An unsupervised learning algorithm trains a model on data that is neither classified nor labeled, allowing the algorithm to find patterns in the data without guidance. The algorithm groups the unsorted information according to patterns without any prior training on labeled examples.
Unlike supervised learning, no labels are provided with the data, which means no supervised training is done. The model must therefore find the hidden structure in unlabeled data by itself.
In this guide, we’ll discuss the two most prominent unsupervised learning algorithms, namely K-mean clustering and Principal component analysis.
9) K-means clustering Algorithm
K-Means Clustering is an unsupervised learning algorithm that groups an unlabeled dataset into different clusters. Here K is the predefined number of clusters to be created: if K=2 there will be two clusters, for K=3 there will be three clusters, and so on.
“It is an iterative algorithm that divides the unlabeled dataset into k different clusters in such a way that each dataset belongs to only one group that has similar properties.”
To choose the optimal number of clusters for the model, we use the elbow method:
- Train the K-means clustering model on the given dataset for different K values (ranging from 1 to 10).
- For each value of K, calculate the WCSS (within-cluster sum of squares) value.
- Plot the calculated WCSS values against the number of clusters K.
- The point where the plot bends steeply, like the elbow of an arm, is considered the best value of K.
The WCSS curve looks like this:
From the curve, we deduce that the most suitable number of clusters (K) is 5. Hence, after finding the number of clusters using the elbow method, the code to fit the unlabeled data to a K-means clustering model is:
The number of clusters and the data segmentation and centroid of the clusters created by the model is:
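The original code and cluster plot are missing; this sketch reproduces both steps (the elbow loop and the final K=5 fit) on generated blob data standing in for the original dataset:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic unlabeled data with 5 natural groups (stand-in for the original)
X, _ = make_blobs(n_samples=300, centers=5, random_state=42)

# Elbow method: record WCSS (KMeans' inertia_) for K = 1..10
wcss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss.append(km.inertia_)

# Fit the final model with the K suggested by the elbow (5 here)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
centroids = kmeans.cluster_centers_
```

Plotting `range(1, 11)` against `wcss` gives the elbow curve described above.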
10) Principal Component Analysis
The main purpose of PCA is to reduce the complexity of a model: it simplifies the model and improves performance. For datasets with a lot of features, we extract a much smaller set of independent variables (the principal components) that explain most of the variance.
Principal component analysis extracts linear composites of the observed variables. Factor analysis, by contrast, is a formal model predicting observed variables from theoretical latent factors. We use PCA to maximize the total variance and find distinguishable patterns, and factor analysis to maximize the shared variance for latent constructs or variables.
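No example accompanies this section in the extract; a minimal PCA sketch on generated data shows the variance-maximizing reduction described above (10 features compressed to the 2 components that explain the most variance):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

# 10 correlated features; keep only the 2 highest-variance components
X, _ = make_classification(n_samples=100, n_features=10, n_informative=4,
                           random_state=0)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

# Fraction of total variance each kept component explains
explained = pca.explained_variance_ratio_
```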
Reinforcement learning is used to make a sequence of decisions. The model learns to achieve a goal in an uncertain, potentially complex environment. The concept of reinforcement learning is very similar to a game: the model uses trial and error to solve the problem, and receives either rewards or penalties for the actions it performs. The model's primary goal is to maximize the total reward.
There are two types of Reinforcement:
Positive reinforcement is when an event, occurring as a result of a particular behavior, increases the strength and frequency of that behavior. Consequently, it has a positive effect on the model's behavior.
Advantages of positive reinforcement:
- Maximizes performance
- Sustains change for a long period of time
Disadvantages of positive reinforcement:
- Too much reinforcement can lead to an overload of states, which can diminish the results.
Negative reinforcement is defined as the strengthening of behavior because a negative condition is stopped or avoided.
Advantages of negative reinforcement:
- Increases behavior
- Provides defiance of a minimum standard of performance
Disadvantages of negative reinforcement:
- It provides only enough motivation to meet the minimum behavior.
Now that you know all about machine learning algorithms, you can start working with machine learning projects to apply your knowledge to real-world problems.
Machine learning is all about handling and processing data and selecting the best algorithm to train your model for optimal results. Python libraries like scikit-learn make it easy to train on your data without working out the actual mathematics behind the algorithm, but understanding the algorithm to its core is what makes you a good data scientist.
We have covered 10 of the most prominent machine learning algorithms for beginners in this tutorial. Hope this article helps you create a clear understanding of the buzzword nowadays. That’s right Machine Learning.
Happy Learning :)
10-15-2012 09:36 PM
Hoping someone can point me in the right direction for this. I haven't been working with SAS for very long and have been using mostly SAS 8.2 TS2M0, and we also have SAS 9.2 TS2M0. I'm trying to create the chart below in SAS. I can't seem to find how to create a side by side bar chart in SAS 9.2 TS2M0. I have it working for another program in SAS 8.2 using gchart and greplay, but like the looks of using GTL in SAS 9.2.
Any idea where I should start in either SAS 8.2 or 9.2? Remember it's the TS2M0 version.
THANKS in advance!!
10-16-2012 08:18 AM
Here's another alternative (maybe simpler than using greplay) using Proc Gchart to create the grouped/stacked bars, and annotate to draw the line & right-axis values.
(this particular data doesn't show 'stacked' bar segments, but gchart's subgroup= option does support that).
[alsa-devel] [RFC] AVB - network-based soundcards in ALSA
clemens at ladisch.de
Wed May 28 15:12:57 CEST 2014
Henrik Austad wrote:
> On Tue, May 27, 2014 at 03:47:40PM +0200, Clemens Ladisch wrote:
>> Henrik Austad wrote:
>>> As to moving samples from the buffer onto the network, one approach would
>>> be to wrap a set of samples and place it into a ready frame with headers
>>> and bits set and leave it in a buffer for the network layer to pick up.
>>> The exact method here is not clear to me yet, I need to experiment, and
>>> probably send something off to the networking guys. But before I do that,
>>> I'd like to have a reasonable sane idea of how ALSA should handle this.
>> ALSA expects that the sound card hardware fetches samples whenever it
>> needs them.
> Right, that's what I thought. Is it correct to assume that _all_
> soundcards do this? I.e. no polled memory ops here, only DMA?
All _real_ sound cards use DMA. As for the rest, I don't want to talk
about them. ;-)
>> For USB and FireWire, there is a short queue of packets; the driver
>> appends new packets whenever a bunch of older packets has been completed
>> (as reported by an interrupt).
> Yes, that is what I thought was happening. I was then hoping to do
> something similar with AVB, just with the networking part instead. So
> the net subsystem would act as a hardware device to ALSA and provide a
> wakeup to the snd_media_driver once it is done.
> On a regular PCI soundcard, I had the impression that it would also
> fetch the samples whenever it needed them (you only mention USB and
> Firewire). Is this correct, or is PCI a whole different ballpark?
I mentioned USB and FireWire because these buses require that samples
are sent wrapped inside packets, which implies that the hardware cannot
access the samples in ALSA's ring buffer directly. (Actually, this
would be possible with flexible enough scatter/gather support, but
this has not been implemented yet.)
Regular PCI sound cards typically get told the location of the ring
buffer in memory, and then do everything by themselves. (The driver
then does not need to do anything, except reporting the current position
in the buffer to userspace. This is where disabling period wakeups
would make sense.)
>>> The process of evening out the rate of samples is what traffic shaping and
>>> stream reservation will help you do (or enforce, ymmv), to some extent at
>>> least. The credit based shaper algorithm is designed to force bursty
>>> traffic into a steady stream.
>> In the case of USB and FireWire, the hardware already knows to send
>> isochronous packets at a rate of 8 kHz.
> Yes, that is true.
>> A 'normal' NIC wouldn't be able to do this. Are there NICs that have
>> a separate queue for isochronous packets? Or how else can this be
> As I said in another email, I've only found i210 with support for AVB at
> the moment.
Wikipedia also mentions XMOS and Marvell 88E8059.
> For SR queues with the time based element enabled a queue is only
> eligible for arbitration if the fetch time of the up coming packet has
> been reached.
This is exactly what I meant.
> It also means that avb_media_driver needs to have some awareness over actual
> network hardware.
Implementing AVB (802.1Qav) is not possible without hardware support, so
of course this needs some new interface to the hardware driver(s).
Hmmm, what about <https://github.com/AVnu/Open-AVB>?
This is a hardware design example for the Altera NEEK development board. This is the hardware design example called "video" which was distributed in the 8.0.1 release of the NEEK development platform CD. This design has been updated so that it can be compiled under the 9.0sp1 Altera tools.
An attempt was made to not make any changes to the design at all, but for simplicity the El Camino SD card controller was removed from the design. Additionally the SDC constraints file for this design was reworked to more appropriately constrain the design in the 9.0sp1 environment. Other than these modifications the design should be unchanged.
Please see the NEEK 8.0.1 release CD and documentation for more information on this design.
This example is published in a minimal source form only. You will need to run the build script to create a full project representation of the design.
Download the archives you are interested in and place them in a directory on your system that does not include spaces in the path name. The entire path name of this directory must not contain spaces, so on Windows systems you should avoid putting these in the "My Documents" folder, or on your "Desktop" since these locations are subdirectories of the "Documents and Settings" path, and that would mean that these locations inherit the spaces in that part of the path name.
In order to extract the archives after downloading them, it is recommended that you run the "tar -xzf <filename>" command from a bash shell. For linux users you should have ready access to a bash shell. For windows users, you may need to install the Altera development tools to gain access to a bash shell. On Windows it is recommended that you install the Altera Quartus II FPGA development tools along with the IP base suite as well as the Nios II EDS development tools. Once these tool chains are properly installed on your workstation, you can launch a bash shell by running:
"Start -> Programs -> Altera -> Nios II EDS 9.0 -> Nios II 9.0 Command Shell"
Once you are in the bash shell, you can "cd" into the directory containing the archives that you've downloaded, and running the following command to extract them:
tar -xzf <archive_filename>
Note that if you use some other archiving software to extract these archives, like WinZip, you may lose the execution privileges on some of the shell scripts within the archives that are used to perform various activities associated with building and using the example. If this happens, you can restore execute privileges from within a bash shell with the command "chmod +x <filename>". It is recommended that you avoid this situation by using "tar" to extract the archives from within a bash shell and avoid using any Windows oriented archive utilities with these archives.
Building the example
After you have extracted the archive you should be able to locate the shell script "create-this-hardware" within the "build_scripts" subdirectory of the archive directory. In the bash shell, "cd" into this directory and run this script like this:
This script should run and fully create the example design without any errors; however, there is no guarantee that this particular build will meet timing. A timing check is run at the end of the build script and a PASS / FAIL indication should be printed to the console regarding timing closure. If your timing closure fails, there are three things that you can manipulate to remedy this.
First, and most difficult, is the development platform: different versions of Linux can produce different results, and Windows will produce different results from Linux. However, most of us don't have multiple development platforms to throw at any given build to change the timing results.
Second is the Quartus II placement seed. This is rather easy to change in Quartus, and recompiling the design will tell you whether the new seed has been more successful.
Third, you can change the source code. In this example design this is not very difficult: since the design contains a system ID peripheral, regenerating the SOPC Builder system will alter the timestamp constant in this peripheral and produce a source code change. You could then recompile the design in Quartus and see where that takes your timing closure. Another way to achieve this third option is to simply rerun the "create-this-hardware" script from scratch, as this will essentially accomplish the same thing, and it is this very action that makes the timing closure results unpredictable from build to build.
The "create-this-hardware" is a simple bash shell script that invokes other shell scripts and TCL scripts to create the example design. Please examine these scripts to get a better understanding of how the system is built. Once you have built the system, you can examine the resultant Quartus project for information about how the system was constructed and any other details about the SOPC system and Quartus project.
Gmod CSS Missing Textures
If you are here you are more than likely looking for a way to fix your Counter-Strike: Source texture and model related errors.
• First: Make sure that you have downloaded and installed WinRAR so that you can extract the contents of the zip folder.
• Second: Download the CS:S Textures and go to the downloaded compressed file, which should be called CSSGameContent. Next right click the folder and select "Extract to CSSGameContent /"; doing this will create a separate folder.
• Third: Now go to your Steam Library. Right Click "Garry's Mod" and go to Properties --> Local Files --> Browse Local Files. Enter the "garrysmod" folder, then open the "addons" folder.
• Fourth: Open the contents of the extracted folder called CSSGameContent; you should see a folder called CSS_Game_Content.
• Fifth: Now just drag the entire folder called CSS_Game_Content into your addons folder and you will be all set AFTER you close and reopen your Garry's Mod client!
Below will be a video providing a step by step process of how to do what was explained above.
If you have issue with the steps above please refer to this video.
^ In 1851 ^ "The introduction of punched playing cards to the new engine was vital not simply as a far more effortless form of Command as opposed to drums, or for the reason that courses could now be of limitless extent, and will be stored and repeated with no danger of introducing problems in setting the device by hand; it absolutely was essential also since it served to crystallize Babbage's experience that he had invented a thing genuinely new, a thing A lot much more than a complicated calculating machine." Bruce Collier, 1970 ^ See the entry
This job is mainly focused on centered on catastrophe recovery after a disaster with computer units. Normal responsibilities / capabilities: establish strategies for catastrophe prevention and for resuming functions; make sure backup of knowledge to the Corporation (procedure-sensible); style and design and implement computer methods that can aid continuous operations; connect with vendors when important; structure and examination Restoration designs; report threat opportunity to senior management.
We involve your e-mail address in order that we can send out you an e mail notify once the tutor responds towards your information.
Need to know more details on Occupations in earth and Actual physical sciences? Search through comprehensive info on dozens of Occupations to find out what experts, engineers, together with other STEM gurus truly do and what it's going to take to prepare for these careers.
Software Highlights Numerous college students at UW take part in faculty analysis, internships, co-ops and focused analyze overseas journeys connected to their main.
There are many ways you will get help with all your homework and assignments, and it helps to grasp what exactly is around and what you may need so as to make your quest for a tutor more effective.
This is a standard technical manager role, and in some organizations this title can incorporate other managerial responsibilities such as overseeing networks; managing network engineers, databases, database analysts, and developers; and more. Typical duties / skills: manage help desk / technical support teams for both internal and external users; budget for support staff, equipment, and software; be involved with company plans for hardware and software upgrades; define company phone procedures and policies and monitor staff conduct on calls; ensure the updating of relevant documentation. The role usually requires industry-related technical expertise and may involve physical work.
This weeklong summer season camp is made to help girls see that STEM topics is often enjoyable and meaningful, starting off at a youthful age.
Normal duties / expertise: review wireless networking and communication needs; style and establish network infrastructure; ability organizing; endorse procedure advancements; document important processes; establish any necessary program like drivers; monitor programs use and effectiveness; setup and operate wireless community assessments. A senior placement could lead a staff of junior and intermediate engineers.
So, at this point bonus becomes the supply of drive for the staff. This is certainly extrinsic incentives. It in essence involves price or cash in the form of discounted, reward, sale, reward, and so on. Intrinsic incentives is The interior sensation of fulfillment for that get the job done. Consider an example: In the event your get the job done provide positive modify in the sphere you might be used, you are going to come to feel superior and happy. This is certainly intrinsic incentives that motivates you to operate more durable.
Must connect with department managers on IT requirements; include comments from both equally interior and exterior users into organization necessities documents; integrate feedback from designers; contribute specialized prerequisites; advise technical teams on their and their engineering’s function in the Firm; deliver assistance to programmer / developers with use scenarios.
(e.g., if in a divisional office); draft a security breach prevention plan; define audit procedures; report audit findings. This position is more likely to require a background in MIS (Management Information Science) or business administration, although IT skills are valuable.
This position may require practical experience with specific third-party applications, and sometimes overlaps with Database Developer responsibilities.
Course Summary: Advanced Data Analytics with Spark
Our Advanced Data Analytics with Spark cohort is a 4 week evening course.
Apache Spark is a fast and general engine for large-scale data processing.
Spark was developed as an alternative to the traditional MapReduce processing paradigm. By using in-memory storage, Spark can achieve up to 100X the speed of Hadoop MapReduce and is 10X faster when running on disk. Spark is preferred for iterative processing, which many machine learning algorithms rely on.
Spark runs on top of Hadoop, as a standalone platform, or in the cloud. It is easy to use, fast, and has a powerful stack of libraries including SQL and DataFrames. Our course will require that you have some experience programming in Python.
Course Details: Advanced Data Analytics with Spark
Week 1 : Spark Fundamentals
- C: Introduction to Spark
- C: Why Spark?
- C: Introduction to RDDs
- C: Data sharing
- C: Data Partitioning
- C: Working with the Spark Shell
Week 2 : Spark SQL
- C: What is Spark SQL?
- C: Spark SQL vs Spark Core
- C: DataFrames API
Week 3 : Spark Streaming
- C: DStreams
- C: Transformations: Stateless and Stateful Transformation
- C: Checkpointing and Output Operations
- C: Tuning and Debugging Spark
Learning Objectives: Advanced Data Analytics with Spark
- Become familiar with Spark fundamentals. Learn about the different components of Spark.
- Use Spark on a HDFS cluster. Gain experience working with RDDs.
- Learn how to tune and debug Spark.
- Tools used : Python, Spark
- Drop us a note to schedule an interview and see if this course is a good fit for you.
- January 10th, 2017 - February 2nd, 2017
Tuesday and Thursday: 6:30 PM to 9:30 PM
Financing Options available with:
I used to use John MacLean
for all my profiling needs when I lived in CA. He is prompt and has good pricing. I never liked using the big guys; I always felt I was just a number or an account code with them.
I guess there is no good reason why I shouldn't still be using him except I'm on a different continent now.
Not sure if will give you tight enough control but I have been using the Power Allowance feature to turn off an outlet after a set period of time - set in minutes. One app turns on an outlet that controls a space heater if my garage is too cold, and the power allowance turns it off after 45 minutes.
Unfortunately I don’t think the native ‘power allowance’ app will let you input less than a minute.
Rob is wanting 20-30 seconds
Btw… I have an app for your garage heater too
(Assuming you have some kind of temperature sensor in there)
I know this is already answered using a software solution but for a hardware solution you can use a “one shot” relay. In essence when it gets triggered it will trigger its own output for xx seconds then turn it off even if power is still being applied. Then when power is turned off it resets so it can do it again. The other hardware solution would be using Konnected or ST_Anything since they both support a momentary output with selectable time.
I have updated the files on GitHub as promised (not promising that it will work on Android though!)
There is one thing to confirm…
When you click on the Github Link for the file
you MUST click on ‘RAW’ before you copy the code.
This will open in a full page browser. - CTRL + A to select all then CTRL + C to copy.
This can then be pasted into the IDE (the same as last time)
Hopefully this should work OK now…
I just completed a similar project using an Inovelli dual outlet module. However, I opened and modified it such that I can now control 2 garage doors and get power through on the outlets (they are not controlled any more).
Then I created 2 Simulated Momentary Tile for ST control and a Webcore piston to make it work.
It also works on Presence.
yup, works perfectly now Andy, thank you very very much for your quick solution to my little project. You are awesome!
This is an awesome community…its greatest strength is not the HW, SW, FW, etc., it’s the helpful people here like you, Andy.
Link to the relay you are using?
You are welcome Rob.
Glad we finally got it to work for you…
What you want now is for it to open when you get home and close when you leave
lol, maybe so. But I’ve been fooling around getting my Vivint panel connected to ST. I’ve got it linked through Google Assistant, so I can voice-command the door to open, and handle arming/unlocking through GA now.
Just have to fiddle with getting a Arming Away action on my ST
Thank you for your kind comments Dana
I was helped a lot when I first got my hub so it only seems right to give back if I can
Rob are you using SHM or do you just set normal ‘modes’?
sorry Andy, I’m not sure exactly what it is you're asking (still new to all of this terminology and technology).
Are you asking if I’ve got the Vivint panel to appear on my Smart Home Monitor panel?
If so, then no. I'm just able to control some of it through Google Assistant.
I echo Dana’s comments Andy.
I see that this community actively supports everyone with very helpful and knowledgeable replies from experts like you .
Beginners like myself greatly appreciate your help and your patience.
I just googled the panel… now I understand why you were talking about ‘arming away’ I thought you were talking about smartthings SHM (Smart Home Monitor) which also has an armed/away mode.
Just ignore me… I’ll go away now
Rob to be honest, in the big picture, I’m a real beginner too (but I can do the simple stuff)
I’ve had my hub for around a year and have picked up a bit of coding along the way, but I'm not anywhere near the league of some of the guys here… still learning though.
I decided at the beginning that I wasn’t going to use core or webcore but try and learn the code as a bit of a hobby.
Well I’ve pretty much got my system running how I want it so not much to do now
(even my very tolerant wife is reasonably happy)
well that’s something to look forward to on my journey. If I can make it easy and painless for my wife to use, then I’ll consider whatever the final product turns out to be a success too!
As for the programming side, I tip my hat to you and anyone else who learned how to put it together and make it run. After you signed off last night, I looked into the tutorial on the ST page going through Groovy. And I’ve got to be honest with you: I’ve made a couple of attempts at understanding and learning some type of coding and it has never clicked with me. Maybe it never will, so it's great that you have made that connection and get it.
Maybe I’ll just have to get you to make all of my apps in the future ! LOL
probably the 1st real thing to get her on my side was the house announcing:
“I thought you might like to know,that the washing machine has finished its cycle”
When she heard this for the 1st time the look on her face was priceless (she did roll her eyes afterwards though)
I’m lucky that we have both been in IT for more than 20 years and she is almost as much a gadget freak as I am.
BTW (any folks reading this)
Please don’t think I’m being sexist regarding the washing (laundry) As I work from home most of the time I do it!
As for making your future apps… I’ve probably already got them somewhere
Having seen some of @Danabw posts in the forum (I note she is also very helpful) I believe she uses webcore for most of her automations and announcements.
I’m sure Dana would be able to give her thoughts on using this instead of coding smartapps
OMG, I need that washing machine app !! lol, I’m sure my wife would roll her eyes too
Thanks! I’ll check it out.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00429.warc.gz
|
CC-MAIN-2022-33
| 5,628
| 54
|
http://zyggywebs.com/
|
code
|
I am a freelance web designer based out of the midwestern USA.
I entered into the world of web design almost by accident. It started with a bit of basic coding on forums, a bit of poking around in source code. I soon fell in love with it, and have been learning ever since.
Web design is in high demand these days, and rates are often steep. My goal is to do quality work at reasonable prices, and deliver the website that you want.
I am proficient in HTML and CSS, and I have a fairly strong grasp of most scripting languages and databases, as well as the use of various software. My services range from logo design to building entire websites. My intention as a web designer is to create fully-functional websites that are clean, efficient, and quick to load. I have 5+ years of experience.
If you would like some examples of my work, please check out the Showcase page. I designed this website entirely from raw code.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540491871.35/warc/CC-MAIN-20191207005439-20191207033439-00052.warc.gz
|
CC-MAIN-2019-51
| 920
| 5
|
https://www.lynnsmithphotography.com/
|
code
|
Thanks for stopping by, this is my official website. All pictures you see here are under my copyright, except for the murals, which are copyrighted by the individual artists.
If you are here, it's because we are either friends, you've seen my photos on Facebook or we've had the opportunity to do photo workshops and/or field trips, I've decided this is a more personal way of showing my photos. This site is a work in progress, photos are not permanent nor are the categories, so please stop by often and see what's new.
This is not a public site, so please don't post this on any social media platforms, but feel free to tell your friends. And if you like what you see you can drop me a note via email() and tell me what you think.
This website was constructed by Galen Sterling-Smith, my son; he can do the same for you if you need a website.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610196.46/warc/CC-MAIN-20210613161945-20210613191945-00430.warc.gz
|
CC-MAIN-2021-25
| 844
| 4
|
http://help.rubygems.org/discussions/suggestions/3311-rsync-mirrors
|
code
|
05 Apr, 2013 07:30 PM
I've read several blogs & threads about running gem mirror followed by generate_index, but even after provisioning a 16GB VM with huge swap and waiting 2 days I still cannot get generate_index to complete successfully on the entire rubygems mirror. Is it possible that I could become a mirror partner with rsync? Do you have the gems & indices built with some private rsync mechanism, as I've seen hinted at, that I could become a partner of? I need to provide a mirror to offline networks & this is really painful.
Things like CPAN, PyPI, nodejs, Cygwin, etc. have convenient, functional mirror capabilities, and the best one to date is definitely rsync. It would be great if we could use rsync or get guidance on how the hell to actually build the index for these
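For what it's worth, the fetch side of a mirror is the easy part. A minimal sketch in Python, assuming a compact-index style `/versions` file of `name versions md5` lines after a `---` header separator (treat the format and URLs as assumptions, not gospel):

```python
MIRROR_ROOT = "https://rubygems.org"  # upstream root; illustrative only

def parse_versions(text):
    """Parse a compact-index style /versions file into {gem: [versions]}.
    Header lines before the '---' separator are skipped; each body line
    is assumed to look like 'name 1.0,1.1 <md5>'."""
    gems = {}
    in_body = False
    for line in text.splitlines():
        if line.strip() == "---":
            in_body = True
            continue
        if not in_body or not line.strip():
            continue
        name, versions = line.split(" ")[:2]
        gems.setdefault(name, []).extend(v for v in versions.split(",") if v)
    return gems

def missing_gem_urls(gems, have):
    """Return URLs of .gem files not already present in the local mirror."""
    urls = []
    for name, versions in gems.items():
        for v in versions:
            fname = "%s-%s.gem" % (name, v)
            if fname not in have:
                urls.append("%s/gems/%s" % (MIRROR_ROOT, fname))
    return urls
```

The hard part the poster hits, generate_index's memory appetite, is untouched by this sketch; it only shows why an rsync-able pre-built index would make the whole problem go away.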
on 09 Apr, 2013 08:14 PM
on 08 May, 2013 12:15 AM
Agreed. Check out this thread:
We really just need people who are excited and passionate about making sure mirroring works to help make it work and stick around.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400372851.33/warc/CC-MAIN-20141119123252-00118-ip-10-235-23-156.ec2.internal.warc.gz
|
CC-MAIN-2014-49
| 1,520
| 31
|
https://www.softpile.com/btrace/
|
code
|
Free and dynamic safe tracing system for the Java platform
Version: 1.0
BTrace is a free, safe, and dynamic tracing tool for the Java platform. BTrace can be used to dynamically trace a running Java program (similar to DTrace for OpenSolaris applications and OS).
Operating System: Mac OS X
BTrace dynamically instruments the classes of the target application to inject tracing code ("bytecode tracing"). Tracing code is expressed in Java programming language. There is also integration with DTrace for the OpenSolaris platform.
NOTE: BTrace is released and licensed under the terms of the GNU General Public License v.2 w/Classpath Exception.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703544403.51/warc/CC-MAIN-20210124013637-20210124043637-00403.warc.gz
|
CC-MAIN-2021-04
| 634
| 5
|
http://cloudcomputing.sys-con.com/node/2316718
|
code
|
By Larry Carvalho
July 21, 2012 12:00 PM EDT
At the recent Red Hat summit in Boston the company took the unusual step of combining financial analysts and industry analysts in the same meeting. It was interesting to see CFO Charles E. Peters Jr. on stage dressed in Wall Street attire talking financials, and CEO Jim Whitehurst in jeans and red shoes talking strategy. Very few large enterprises have similar diversity among top executives, showing a unique corporate culture at Red Hat.
Whitehurst presented Red Hat’s three top priorities: increasing revenues, community development and decreasing costs. It was a refreshing change from other technology vendors who seem primarily focused on cost cutting. Focusing on the community provides Red Hat a unique opportunity to "crowd source" innovation with active participation of customers & partners in the product development process. This approach also benefits customers, assuring them that they have an influence when product features are prioritized.
Like many technology conferences, Red Hat’s included a significant interest in cloud computing. As a result, the exhibit area featuring the company’s cloud products was very busy. Customers were interested to see how Red Hat’s cloud computing strategy augmented their existing middleware products. The company’s cloud products include OpenShift (PaaS), CloudForms (Hybrid Cloud Management), Red Hat Storage (hybrid cloud storage from the Gluster acquisition) and OpenStack (an IaaS offering still to be released). When combined with Jboss, Linux, messaging and virtualization, Red Hat provides a comprehensive platform on which enterprises can develop cloud infrastructures. At present, very few vendors can offer such an end-to-end solution.
Why are Red Hat and open source so relevant in cloud computing? In recent years, many enterprises have had bad experiences with being locked in to ERP and database platform investments. Security is another roadblock to cloud adoption by enterprises. Here are a couple of relevant examples: Sprint migrated to Red Hat’s open source application server from a proprietary solution, while significantly reducing their expense. Customers whose ERP investments are locked in to current vendors face significant challenges migrating off existing products. The adoption of on-premise HR applications and associated migration costs make it hard for CIOs to justify the leap into a cloud-delivered solution like Workday.
Cloud computing’s success has been built on open source as proven by Amazon, Google and Facebook. A number of business cases would not have been viable using proprietary software. The availability of open source options makes a unique value proposition possible for start-ups. These successes in the cloud are now being increasingly replicated by enterprises that contributed to Red Hat’s revenue crossing $1B in the past fiscal year, demonstrating its momentum. The opportunity for Red Hat is to take its cloud computing stack and make adoption a no-brainer for customers already using its existing middleware stack.
At the same time, Red Hat has some unique challenges:
- Any new platform needs to have strong developer support. Red Hat will need significant investment to attract developers to their OpenShift platform, without which they risk slow adoption.
- While there is increasing movement to simplify IT investments by integrating hardware with software platforms, Red Hat lacks hardware in their portfolio.
- Proprietary software vendors are jumping into open source with a "me too" mentality, with several vendors announcing support of new languages and open frameworks. This could lead to diluting the value of traditional open source platforms like Red Hat.
Cloud computing is evolving at breakneck speed, largely due to a community driven open architecture that embraces multiple languages and frameworks. As a result, technology companies are in a race to be open, while Red Hat has the unique culture of having been "born" open. Time will tell how large Red Hat can grow in enterprise markets, especially in the cloud computing space, building on the momentum of customers defecting from proprietary software.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720941.32/warc/CC-MAIN-20161020183840-00163-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 14,551
| 52
|
https://bugzilla.redhat.com/show_bug.cgi?id=848903
|
code
|
Red Hat Bugzilla – Bug 848903
Any chance of moving /etc/my.cnf to /etc/mysql/my.cnf?
Last modified: 2012-08-20 10:30:24 EDT
Trying to build containers with a shared /etc, with configuration data in the /etc dir, gets very difficult. If you left a symbolic link from /etc/my.cnf to /etc/mysql/my.cnf, that would give you backwards compatibility.
The container code then could bind mount a chroot /etc/mysql over the real /etc/mysql and allow processes within the container to write their config.
Hm, not sure about that. The scenario that worries me is that somebody edits /etc/my.cnf and his editor moves the symlink to /etc/my.cnf~ and writes a plain file at /etc/my.cnf. Emacs seems to know enough to not do that, but I don't have a lot of confidence that all other editors do too.
In general, we do not expect mysql-related processes to ever write /etc/my.cnf: it's only meant to be edited by humans. So I'm not sure exactly what scenario you're trying to cater to?
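The worry in comment 1 is easy to reproduce: any editor that saves by writing a temp file and renaming it over the original will silently replace a symlink with a plain file. A small Python demonstration (illustrative only, not how any particular editor is implemented):

```python
import os
import tempfile

def editor_style_save(path, text):
    """Mimic an editor that writes a new file and renames it into place.
    If `path` was a symlink, the rename replaces the symlink itself,
    not the file it pointed to."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(text)
    os.replace(tmp, path)  # atomic rename: the symlink is gone after this
```

After a save like this on a symlinked my.cnf, the link is a regular file and the target it pointed at no longer receives edits.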
Well, as I continued to play with containers, I was able to get mysql to work without having to make this change. I think we can close this for now. I am not sure what it was complaining about. If it starts happening in F18 with containers, we can look at this again.
Currently I have all containers sharing the same /etc/my.cnf.
Tom, do you know if postgresql will set itself up on the first service start like mysql does? I am planning on attempting to create/start postgresql within a container.
(In reply to comment #2)
> Tom, do you know if postgresql will set itself up on the first service start
> like mysql does? I am planning on attempting to create/start postgresql
> within a container.
It will not --- you need to issue an explicit database initialization command first. Either "service postgresql initdb" or "postgresql-setup initdb" should do in the latest packages; before that you need one or the other depending on how old the package is.
The lack of an auto-initdb on first start is intentional, based on upstream practice and recommendation. Many years ago, our initscript did have auto-initdb behavior, but then there were some high-profile cases of people losing their databases to it. (IIRC the typical mechanism was to have the database on an NFS mount that was just a tad slow to come up, so that the mount point looked empty just before initdb was called, and then at some point initdb was overwriting existing catalog files.) So now we insist on a manual action for that.
Personally I'd make the mysql initscript act the same way for the same reason, if I thought I could get away with it. But I think mysql users are too used to "it just works!"
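The failure mode Tom describes, a mount point that merely looks empty, is why "directory is empty" alone is a dangerous trigger. A hedged Python sketch of a guarded first start (the sentinel idea, its name, and the injectable runner are all illustrative, not what the Fedora packaging actually does):

```python
import os
import subprocess

def safe_first_start(pgdata, run=subprocess.run):
    """Run initdb only on a provably fresh data directory.
    'Looks empty' is not enough: a slow NFS mount can make an existing
    database appear empty, the exact mechanism that bit the old
    auto-initdb initscripts. A sentinel file written once to the real
    volume distinguishes a fresh volume from a mount that isn't up yet."""
    sentinel = ".volume-ready"
    entries = set(os.listdir(pgdata)) if os.path.isdir(pgdata) else set()
    if entries - {sentinel}:
        return "existing-data"   # catalog files present: never initdb
    if sentinel not in entries:
        return "refuse"          # looks empty but unconfirmed: bail out
    run(["postgresql-setup", "initdb"], check=True)
    return "initialized"
```

Even with a guard like this, requiring an explicit manual initdb command, as the packages do, remains the safer default.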
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645538.8/warc/CC-MAIN-20180318052202-20180318072202-00702.warc.gz
|
CC-MAIN-2018-13
| 2,646
| 17
|
http://svconline.com/avcontrol/crestron_green_light_power_pack_room_controller/
|
code
|
Crestron Green Light Power Pack Room Controller
Aug 19, 2013 12:19 PM
The Crestron Green Light Power Pack is a room controller designed to communicate with photocells, occupancy sensors, and control stations to automatically control lighting in any room. Designed to mount directly over a pair of adjacent 4in. square junction boxes, the Power Pack is easy to install. High- and low-voltage connections are made using the labeled color-coded flying-lead wires. Lights will turn off automatically when the room is vacated, and rooms with adequate daylight will dim automatically. Power monitoring tracks the energy usage of each Power Pack, delivering statistics to help control energy costs. Users can control the room’s lighting with a discrete wall keypad or use the IR remote. While the Power Pack is a great single-room solution, it is designed to be part of a larger Crestron integrated building system, linked via wired or wireless communication to the central control system.
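The control policy described above reduces to a simple rule. A sketch in Python with made-up thresholds and levels (Crestron's actual tuning and API are not described here):

```python
def lighting_level(occupied, daylight_lux, dim_threshold=500.0):
    """Return a 0-100 lighting level from an occupancy sensor and a
    photocell reading. Threshold and level values are illustrative only."""
    if not occupied:
        return 0        # room vacated: lights off automatically
    if daylight_lux >= dim_threshold:
        return 30       # adequate daylight: dim to save energy
    return 100          # occupied and dark: full on
```

A central controller would evaluate a rule like this per room, which is the single-room behavior the Power Pack automates in hardware.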
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122122092.80/warc/CC-MAIN-20150124175522-00227-ip-10-180-212-252.ec2.internal.warc.gz
|
CC-MAIN-2015-06
| 1,038
| 4
|
https://forums.overclockersclub.com/user/64408-pirateshipwrecked/?tab=posts
|
code
|
Sweet. I don't know my chipset; how do I find that?
Sorry for so many questions, but also: where do I get sound drivers, and what's a good one?
pirateshipwrecked, Member Since 10 Oct 2008
Offline Last Active Jan 03 2011 05:50 PM
- Group Members
- Active Posts 4
- Profile Views 1925
- Member Title New Member
- Age Age Unknown
- Birthday Birthday Unknown
Unfinished, so I'm using one that's not mine. But not a super computer...a super saiyan computer!!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207618.95/warc/CC-MAIN-20190327020750-20190327042750-00292.warc.gz
|
CC-MAIN-2019-13
| 437
| 11
|
http://vw.fed.wiki.org/true-names.html
|
code
|
Vernor Vinge publishes True Names 1981
See also link
True Names was the science fiction novella which brought Vernor Vinge to prominence in 1981. It was one of the earliest stories to present a fully fleshed-out concept of cyberspace, which would later be central to stories in the cyberpunk genre. Because of this, it is often referenced as a seminal work of the genre. The story also contains elements of transhumanism, anarchism, and even hints about The Singularity.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578586680.51/warc/CC-MAIN-20190423035013-20190423061013-00556.warc.gz
|
CC-MAIN-2019-18
| 470
| 3
|
https://www.7qasearch.com/does-microsoft-have-a-help-desk
|
code
|
Does Microsoft Have a Help Desk?
Do you want to know whether Microsoft has a help desk? Do you want customer or technical support from Microsoft because you are facing an issue with Hotmail services? If the answer is yes, here is the solution: Microsoft provides several channels through which customers can get support, especially for Hotmail.
Often, when Microsoft users face an issue with a Microsoft service, they want help from Microsoft's help desk. Microsoft provides Hotmail technical support through a help desk that assists users with whatever issue they face, whether a Hotmail technical problem or anything else, and the help desk assistant will offer the best possible solution. Since many users don't know how to reach the help desk, the steps below walk through how to do it.
Steps to reach the help desk from Microsoft
- First, open any web browser and search for Microsoft.
- Open the official Microsoft website.
- Scroll to the Help and Support option at the bottom of the page.
- Tap Help and Support to reach the Microsoft support team.
- The Contact Us, Search for the Issue, and Get Help options will appear on the page.
- You can choose any method to resolve your issue; for the help desk, tap the Get Help option.
- When the help desk panel appears on the screen, type the issue you are facing with Microsoft.
- The help desk assistant will then help by providing the best possible solution to your problem.
The steps above will help users reach the Microsoft help desk. Whenever users face an issue, they can get help from Hotmail customer service through Microsoft's help desk option. Below are some of the benefits of the help desk for users.
Benefits of the help desk to users
- Receive direct help: the help desk lets users get direct help from the Microsoft customer care team for the issue they are facing.
- Gain expert help: through the help desk, users can get expert assistance and resolve their issue in the best possible way.
- Obtain solutions for premium plans and related issues: users who want to purchase a Microsoft premium plan, or get a refund for one, can resolve the problem through the Microsoft help desk.
- Get relevant recovery information: users who have trouble recovering their Hotmail account password can get assistance directly through the help desk option.
- Enjoy 24x7 assistance: the help desk gives users round-the-clock assistance from the Microsoft team in resolving their issue.
These are some of the benefits of getting assistance through the help desk. If you are still facing an issue, you are advised to dial the Microsoft phone number and talk to Microsoft customer care about the problem you are facing with Microsoft services.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00015.warc.gz
|
CC-MAIN-2023-50
| 3,903
| 20
|
http://www.babynamegenie.com/community/index.php?showtopic=26618?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
|
code
|
need some third person advice
Posted 15 November 2013 - 10:37 PM
Raya Lynn Maruska
Jason Isaac maruska
mason Christopher maruska
any advice or comments would be great, I just wanna get a real feel for these names because they seem to be stuck in my head lately
Posted 16 November 2013 - 08:55 AM
Mason and Jason rhyme so those would be a no from me.
Posted 16 November 2013 - 09:35 AM
Posted 16 November 2013 - 03:23 PM
I agree that Mason and Jason sound really similar, but they're nice names regardless.
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863259.12/warc/CC-MAIN-20180619232009-20180620012009-00277.warc.gz
|
CC-MAIN-2018-26
| 581
| 13
|
http://www.blog44.ca/alfieh/2021/02/20/spoiler-alert-im-british/
|
code
|
Hello again! (probably) This blog post is for the 4th blogging challenge, "Fun With Photos". In this blog we have to draw something that represents a certain aspect of my worldview. The worldview aspect I chose is geography: how does where I live and where I was born affect how I see the world? This is what I drew:
I drew this using SketchesPro and combining multiple layers to stack the flags on top of each other.
The reason I chose to make this is to show that, as you can see from the title, I'm British. I'm from the cold, rainy land of England. That's why the Canadian and British flags are combined, to show I'm from a country I no longer live in. Being British has given me a lot of cultural differences from being in Canada: Thanksgiving is way more prominent here, and certain hand gestures mean different things.
To end this post, I want to tell you that getting good at learning these differences has helped me be more understanding in disagreements, because I was also really confused by differences and misunderstandings once. Thanks for reading my blog, and maybe you can tell me what heritage you have and how it affects you?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00244.warc.gz
|
CC-MAIN-2022-33
| 1,145
| 4
|
https://curezone.com/forums/fm.asp?i=832318
|
code
|
I just saw your post below where you say you have also been ill - sorry man, I guess I didn't check down far enough and pretty much shot from the hip - my profuse apologies.
Just had a really bad week :(, I'm really sorry if I was out of line before. Had the hospital on Monday where the doctor told me he didnt really think candida could cause a problem. I had to physically restrain myself from leaping over the desk and throttling him. It feels like some friends don't think it's a problem either and are actually suggesting I *enjoy* being ill - AAAAAAAAAAAAAARGGGGGGGGGGGGGGHHHHHHH!
Hey, your humour is genuinely appreciated, thank you - if you can raise a laugh then it's much appreciated and we thank you wholeheartedly!
Good luck to you, we will be rooting for you along with every other soul on here ;)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510462.75/warc/CC-MAIN-20230928230810-20230929020810-00225.warc.gz
|
CC-MAIN-2023-40
| 807
| 4
|
https://forum.arduino.cc/t/arduino-diecimila-not-loading-sketch/25457
|
code
|
My Arduino isn't uploading sketches, though the power light is on. The RX and TX LEDs don't light up at all and I get this error:
avrdude: stk500_getsync() : not in sync: resp=0x00
avrdude: stk500_disable() : protocol error, expect=0x14, resp=0x51
Can someone please tell me what this means??!!
It means that something is not working with the link between the arduino and the computer. It could be a lot of things.
How are you sending to the arduino? Usb? FTDI cable?
Is anything else hooked up to digital pins 0 and 1?
Do you have to push the reset button on your type of arduino like board in order to get it to accept a new sketch? ( you do have to do that on some models)
Are you sure you are sending to the right serial port?
Is there something wrong with your cable?
You see.... lots of questions. You need to be extremely specific about your setup and what you've tried in order for people to help you narrow this down. This can be a really annoying problem.
Sorry I will try to be more specific.
It started when I plugged my Arduino into a 9 volt wall wart. The black chip between the power jack and the ext/usb headers heated up, so I unplugged it. I tried using the USB cable, but when I plugged it in the power light only lit up for a second, then went out. I let it cool down for half an hour, and when I plugged it in the power worked fine, but it wouldn't upload a sketch; it would just give the errors from the last post. I tried switching serial ports, switching USB plugs, resetting it, using external power - nothing seems to work. Sometimes when I try to upload a sketch the RX light flickers, but only a little bit, and when I tried it recently I didn't get anything. The computer seems to still recognize the Arduino when I plug it in, because it makes that sound when you plug something into USB, so there is some kind of connection.
Hope this is enough detail for you to help me!!
Thanks a lot
Well, unfortunately it seems like you've probably smoked the board somewhere. The computer will register a connection if the FTDI chip on the arduino is still working (at least for the most part.)
The black chip you are referring to would probably be the 7805 voltage regulator and you board would probably be the diecimila board. That chip is supposed to get somewhat warm but if it got really warm then it means that there was a large current draw on the board. This could be due to a short circuit somewhere or maybe something you had plugged into the board.
A number of bad things might have occurred. If the ATmega168 chip got too low a voltage then its flash could have gotten corrupted. If the bootloader is corrupted then you won't be able to use the USB interface to upload sketches. You can use a parallel port programmer to do it. Chances are, you don't have one of those. You also could have some other component which is bad.
Depending on how seriously you are into this (as a hobby, a job, a very serious hobby, etc) you really should invest in spare parts. Get some 7805's, some spare caps, resistors. You really also ought to order up more atmel 168 processors. It's best for the hobbyist to order chips already programmed with the bootloader so that you don't have to get special stuff to do it yourself. Though, a programmer can be used to reload the bootloader onto chips you've gotten corrupted so if you are dealing with a number of arduinos then it might be worth it.
In short, I think that you are going to have to assume that your board is damaged in some way.
Okay, I'm getting basically the same error, and I've come to the same conclusion, i.e. something is damaged. The thing is, nothing looks bad visually: the power LED comes on, and the pin 13 LED flicks on reset, flicks multiple times when I try to upload (which fails with the out-of-sync error), and even flicks when I pulse the DTR line on the serial connection.
If the parallel port programmer doesn't fix it, then what else could be wrong? Some weird damage with the atmega? In my experience ICs usually completely go when they do, never partially. If so can I just buy blank atmegas, pop 'em in the arduino and use my parallel programmer to flash the bootloader?
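For reference, reflashing the bootloader with avrdude over a parallel port programmer might look roughly like the sketch below. The programmer id (`dapa`), port, and bootloader filename are assumptions for illustration - check them against your own avrdude setup and the bootloader hex that ships with your Arduino IDE version.

```shell
# Sanity check (hypothetical setup): see whether the ATmega168 answers at all
# over a DAPA-style parallel port programmer on /dev/parport0
avrdude -c dapa -P /dev/parport0 -p m168 -v

# Write the Diecimila bootloader back to flash; the hex filename here is an
# assumption -- use the file from your Arduino IDE's bootloaders folder
avrdude -c dapa -P /dev/parport0 -p m168 -U flash:w:ATmegaBOOT_168_diecimila.hex
```

If the first command cannot read the device signature, the chip itself (or its fuse settings) is likely the problem rather than the bootloader.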
I've got a similar problem that I can't upload sketches to my Arduino. I have a feeling that the USB chip might be fried but I don't know for sure. Is there any way of troubleshooting that chip?
The sad truth in hardware design is no.
The hardware way of troubleshooting is:
Check that all DC power supplies are within limits.
Guess what is not working; base your guesses on different measurements on the board. If a chip is discolored, it is quite an easy guess.
Swap the chip and hope your design starts again.
If it does not start, make a new guess and change another chip.
You can also have problems with bad solder joints, conductors broken inside the board, etc.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.26/warc/CC-MAIN-20240227121318-20240227151318-00169.warc.gz
|
CC-MAIN-2024-10
| 4,822
| 30
|
https://www.mail-archive.com/zones-discuss@opensolaris.org/msg00421.html
|
code
|
On Mon, Aug 21, 2006 at 01:03:40PM +0100, Gary Pennington wrote:
> On Mon, Aug 21, 2006 at 02:13:51AM -0700, UNIX admin wrote:
> Zone packages end up needing other packages, which end up needing the
> SUNWj5rt needs a whole bunch of X-windows packages, such as SUNWxwplt and the
It is fixed in Nevada.
Take a look at:
There doesn't seem to be any description of *how* this was fixed.
I'd assumed 'dynamic resource pools' no longer depended on java
(wrongly, judging by SXCR b46).
Rasputin :: Jack of All Trades - Master of Nuns
zones-discuss mailing list
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827992.73/warc/CC-MAIN-20181216191351-20181216213351-00032.warc.gz
|
CC-MAIN-2018-51
| 554
| 11
|
https://www.business-software.com/product/kony-development-cloud/
|
code
|
Kony Development Cloud is available as a SaaS solution that requires no additional hardware.
Kony works with multinational, enterprise and mid-sized customers.
SGN, Warburtons, Engie, Nationwide, ComEd
Kony Development Cloud is an application development software solution that builds hybrid, native and web apps for mobile devices. Throughout the app design process, users can create UIs with a WYSIWYG editor, connect cloud services to different app pieces and work in code.
As app builds are finished, the program can test and debug different code segments and receive comments from testers. Finished apps can be deployed to over 10,000 devices with a single click, and the system ensures that all cloud services are working at optimal capacity. Lastly, Kony Development Cloud can monitor app users by generating analytics on performance and trends.
Kony was founded in 2007 and is headquartered in Orlando. 70 of the Fortune 500 companies use Kony products.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811794.67/warc/CC-MAIN-20180218062032-20180218082032-00475.warc.gz
|
CC-MAIN-2018-09
| 961
| 6
|
https://forum.onefourthlabs.com/t/unable-to-connect-to-local-runtime-google-colab/6609
|
code
|
I am not able to connect to the local runtime. I am using Windows 10 and I executed the commands in the Anaconda PowerShell Prompt. The first two commands executed successfully:
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
While running the command below, I am getting an error. Please help me resolve it:
jupyter notebook \
--NotebookApp.allow_origin='https://colab.research.google.com' \
--port=8888 \
--NotebookApp.port_retries=0
Attached is a screenshot of the error.
Try writing it in a single line like this:
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0
Yeah, the issue is resolved. But when I tried to import torch on Google Colab after connecting to the local runtime, it shows the error 'No module named torch'. How do I resolve that?
Please refer to the following thread:
How to locally install the torch module
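Putting the whole thread together, the local-runtime setup can be sketched as a single command sequence. The commands come from the thread itself; the torch install line is an assumption, since the exact package and version for your platform vary (see pytorch.org):

```shell
# Install and enable the extension that lets Colab talk to a local Jupyter server
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws

# Assumed step: install torch locally so "import torch" works in the local runtime
pip install torch

# Start the server on a single line; note the plain ASCII quotes around the origin
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0
```

Colab's "Connect to local runtime" dialog then accepts the URL (with token) that this command prints.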
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704800238.80/warc/CC-MAIN-20210126135838-20210126165838-00715.warc.gz
|
CC-MAIN-2021-04
| 934
| 6
|
https://community.teradata.com/t5/General/Need-help-creating-a-Stored-Procedure-for-creating-tables/td-p/72757
|
code
|
I am fairly new to SP development in TD and I need help, or at least a pointer in the right direction, on how to create a stored procedure that will loop through the results of a SQL query, say "select requesttext from DB.TEMPTABLEDDLLIST", and execute each table DDL from that list to create the tables within the loop. From what I know, the way to achieve this is through a cursor that loops through the results, but if there is a better way to do it I am open to that approach too. Appreciate the help.
Check the dynamic SQL option in stored procedures - you need a specific exec right for this.
might give you some hints...
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584203540.82/warc/CC-MAIN-20190123064911-20190123090911-00333.warc.gz
|
CC-MAIN-2019-04
| 610
| 3
|
https://teeltechcanada.com/video-forensics-tools/vis-video-identification-system/
|
code
|
VIS - Video Identification System
How Does VIS – Video Identification System Work?
VIS provides investigators with the ability to extract and compare video feature points so that even if the image is distorted or modified when compared against the original source the original content is easily identified.
One of the most important features of VIS is video DNA extraction. The video or image DNA is a set of specific characteristics extracted from the source material, which is then stored and managed in a database. By comparing the HASH value and DNA data in the database, VIS then determines whether it is the same video based on the percentage matched data.
During the DNA extraction and comparison process, investigators and detectives can significantly reduce their workload and time spent comparing and classifying videos, both manually and visually.
VIS can tackle challenges such as:
- Edited playing time
- Color alterations
- Reversed imaging
- Distorted imaging
VIS helps video publishers:
- Minimize the illegal distribution of copyrighted content,
- Establish a sound distribution system ensuring the legitimate collection of royalties,
- Protect copyright material and secure profits
VDE is designed to function fast in low specifications (PC specification, low network bandwidth), and VIS uses an optimized DNA search algorithm to produce results from hundreds of thousands of hours of video DNA within seconds. *May vary slightly depending on server specification.
VIS detects the manipulation in videos (black-and-in-back, inversion, speed control, compression, subtitles, watermark, etc.) with a minimum accuracy of 99%.
VIS is able to register and categorize hundreds of thousands of hours of video, and filters tens of thousands of hours of video per day thanks to database structures specifically designed for processing large amounts of DNA data.
Video content cannot be tracked back from the DNA information extracted from it, and only the DNA information is stored on the server.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819668.74/warc/CC-MAIN-20240424143432-20240424173432-00263.warc.gz
|
CC-MAIN-2024-18
| 2,021
| 18
|
https://www.refinitiv.com/perspectives/future-of-investing-trading/trading-technologies-for-todays-workflows/
|
code
|
Traders are dealing with more data than ever before in their pursuit of alpha. Through the use of AI, machine learning and natural language processing, how are advances in trading technologies from Refinitiv helping to ease the burden?
- Evolving trading technologies reduce the burden of activities such as data retrieval, meaning buy-side and sell-side traders focus more on what the information is telling them.
- AI, machine learning and natural language processing create simplified workflows which enable quicker and better decision-making.
- Advances in trading technologies from Refinitiv are helping to bring alpha-generating ideas to end users in an actionable format.
Ever since shares were exchanged at London coffee houses or under a Buttonwood tree in downtown Manhattan, trading has been evolving.
These advances have included physical or auction-style trading, electronic posting of bids and offers, and the decimalization of stocks, all the way through to today’s market electronification.
Adapting to these changes means buy-side traders are having to wear many hats and look across multiple asset classes and at more data. For the sell-side it means covering more accounts.
So, how is it possible to handle more data and look across more asset classes — all while providing more alpha for their portfolio manager, client, or firm?
The answer is through advances in trading technologies.
Advances in trading technologies
Traders are very good at sourcing and filtering out relevant information, but in the end, they are humans, and their capacity only goes so far.
At Refinitiv, we feel that trading technologies should play a critical role in enhancing human capabilities, and act as an extension of the trader.
We believe that technology has evolved to the point where it can lift the burden off the trader for activities such as data retrieval and help them focus more on the understanding of what the information is trying to tell them.
Watch: Phil DeFrancesco discusses data’s impact on trading (Part 1)
Using AI, machine learning and natural language processing, we are creating simplified workflows with the intention of increasing efficiency of that end user.
Alpha generating ideas
Humans and computers become much more powerful when they work together, so our focus is on helping people make quicker, better decisions rather than simulating human consciousness.
Our Smart Desktop solutions bring alpha generating ideas to our end users in an actionable format.
Being notified of outsized price movements, understanding why that stock is moving, and then knowing who to tell and what to do with this information is a great example of how technology can play a critical part in the investment process.
Beyond the trading desktop, we are also making these solutions available as a microservice or API. This will allow users to pull data into their own infrastructure, giving them even more flexibility over how and where they want to capture this information.
Watch: Phil DeFrancesco discusses data’s impact on trading (Part 2)
At Refinitiv, we are striving to be at the forefront of these types of solutions, especially as we leverage our unparalleled breadth of news and data that cannot be matched by any of our competitors.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511021.4/warc/CC-MAIN-20231002200740-20231002230740-00334.warc.gz
|
CC-MAIN-2023-40
| 3,259
| 22
|
http://dynserv.net/2022/09/23/lenovo-p50-not-starting-up/
|
code
|
I started having an issue this morning with my Lenovo P50. When I try to turn it on, nothing happens. With the laptop plugged in, the power lights on the back are not even on.
I tried removing the battery and the charger and held down the power button for 30 seconds, then plugged in the charger and tried turning it on. After doing this the laptop turns on briefly, I see the logo, and then it turns off. After trying to turn it on a few more times, it eventually gets to a point where only the lights on the keyboard turn on briefly, there is a whir of the fan for about a second, and then it immediately shuts off.
what do you guys think the problem might be?
Maximum memory for Lenovo Ideapad 320-17IKB model 80 XM question
I ran the Lenovo Solution Center program on my Lenovo Ideapad 320-17IKB model 80 XM and it informs me of the following:
Total memory = 8 GB
Maximum Supported Memory = 32 GB
Now, that’s by Lenovo so I trust it. I recognize that there are 4 GB of memory soldered in and there is a card with 4 GB of "replaceable" DDR4 memory installed.
My question is whether I can install a 32 GB "replaceable" type DDR4 RAM card, OR whether that exceeds the "maximum supported memory" specification? I do not know of any 28 GB DDR4 RAM cards, so I'm a bit confused.
I guess it boils down to what "Maximum Supported Memory" means? Does it mean maximum supported "replaceable" memory?
PS-See image attached for Lenovo Solution Center info.
Please request to update the application privacy
Lenovo y700 – 15ISK black screen
I was going on a train, so I turned off my laptop to put it in my backpack. When I tried to boot it on the train, the screen would not turn on, and it stayed like that. I've tried many things (battery drain, BIOS update and more) but it just doesn't work. I can plug in an external monitor and it is just fine. I'm just reminding you that the screen is OFF; it doesn't show anything during bootup or in Windows. The external monitor is treated as the main one, and I can get into the BIOS through it. I really don't know what happened, and I couldn't find any way to fix it on the internet. So I ask, do you know the problem? If yes, can I do something besides just sending it in for service? It looks like my warranty ran out in January this year.
Yoga 510-14ISK doesn't power up at all
I have had a problem since yesterday with my Yoga 510-14ISK laptop: it does not power up at all.
The power LED turns on for a few seconds when I press the power button, but then it turns itself off and I don't see anything on screen.
I tried to remove the RAM and put it back, and I've swapped back to the original module, but that's no better.
This is my main laptop, the one I’m using to work and it’s running Linux Mint 18.3.
Thanks for your help.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335514.65/warc/CC-MAIN-20221001003954-20221001033954-00614.warc.gz
|
CC-MAIN-2022-40
| 2,752
| 20
|
https://www.kashipara.com/project/idea/java/account-management-system_365.html
|
code
|
Account Management system project on java
Account Management system project features and function requirements. Share Java project ideas and topics with us. Here you will find many great Java project ideas and topics, including some suitable for research papers, along with a large collection of Java projects with source code and databases. We develop many kinds of applications, such as mobile applications, desktop software, and web applications, and you can find more project topics and ideas on Java, including development ideas for an Account Management system. Many projects are available to download with Java source code and database. A free Account Management system project synopsis is available for download.
free download Account Management system mini and major java project source code. download simple learning java project source code with diagram and documentations.
other project submit by Thirumalai
Account Management system project abstract
Project Name: Account Management system
Account Management system project description
As per the client requirement, I want to design accounting software. I have heard about the Tally accounting software, but I don't know what a ledger is, etc. Can anyone guide me in developing accounting software, or give me ideas about it?
Latest java Project Source Code
This is a Quick Sort program but this is not a simple sorting program but it generated random numbers and it sorts it .... view more
This is a simple program of Bucket Sort . Bucket Sort is a simple sorting method in ADA . This is a random number generation with bucket sorting program that gives brief idea about how randomly numbers are generated and sorting is done.... view more
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541883.3/warc/CC-MAIN-20161202170901-00509-ip-10-31-129-80.ec2.internal.warc.gz
|
CC-MAIN-2016-50
| 1,644
| 11
|
https://zebrahub.ds.czbiohub.org/imaging
|
code
|
light-sheet imaging, photo-conversion, cell lineage reconstruction
Recent technological advancements in light-sheet microscopy have opened up exciting new opportunities for in toto imaging of whole developmental arcs at the cellular level. With the help of advanced fluorescent proteins, light-sheet microscopy enables the reconstruction of the entirety of embryonic development at a cellular level, allowing for image-based lineage reconstruction.
These two datasets were acquired on our OpenSiMView light-sheet microscope. You can explore the full timelapse image data here:
These two datasets were acquired on our DaXi light-sheet microscope. You can explore the full timelapse image data here:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474775.80/warc/CC-MAIN-20240229003536-20240229033536-00267.warc.gz
|
CC-MAIN-2024-10
| 697
| 4
|
https://msdn.microsoft.com/en-us/library/Vstudio/system.xml.xmlreader.valuetype
|
code
|
Gets the common language runtime (CLR) type for the current node.
Assemblies: System.Xml (in System.Xml.dll)
System.Xml.ReaderWriter (in System.Xml.ReaderWriter.dll)
See Type Support in the System.Xml Classes for a list of the default mappings.
An element of type xs:int has a ValueType of System.Int32 by default. However, the ValueType could be one of the valid types that can be mapped to xs:int, such as System.Int16 or System.Double.
If a node is un-typed, or if the node is an element that contains mixed content, the node value is mapped to the System.String type.
Notes to Implementers
Implementers must provide a ValueType for every node, even if it is only the System.String type.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068184.11/warc/CC-MAIN-20150827025428-00060-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 689
| 7
|
http://dipyoutube.com/a-few-risks-to-keep-in-mind-while-choosing-web-developers/
|
code
|
A few risks to keep in mind while choosing web developers
There is now an increasing demand for website designers and web developers, as the number of websites keeps growing day by day. Even as we read this write-up, it is likely that a few dozen new websites are being hosted on the world-wide web. In a situation like this, choosing the best web designer Sheffield professional can be a challenging job. If you look around your local area and surf the internet, it is quite obvious that you will encounter plenty of web designer Sheffield providers. All of them will look the same, and this can compound your confusion in more ways than one. Thus, you should be aware of a few guidelines that might help you pick the most suitable web developer Sheffield when you have many choices to look at.
Always Look for Experience & Know-how
Becoming a proficient and renowned web developer cannot happen overnight. It requires quite a bit of time, patience, knowledge, wisdom and skill gathering, among other things. Thus, it would be advisable to look for developers and designers who can show several years of practical experience. Experience will not only help website developers hone and build their skills but will also help them understand client needs from various viewpoints and scenarios.
A Good Team of Programmers
You also need to make sure that the web designers and developers have the right team of programmers. They are the backbone for developing the basic skeleton, or the platform, on which the website design and other such features will sit. They must be familiar with modern online programming tools.
Ability to Deliver on Time
The next important point is to always look for web developers who have a good track record of delivering their projects on time. Time overruns are common, and when schedules are missed, costs also increase for the customers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358798.23/warc/CC-MAIN-20210227084805-20210227114805-00214.warc.gz
|
CC-MAIN-2021-10
| 2,131
| 8
|
https://brotherprintersupportnumber.com/install-brother-printer-drivers-on-mac/
|
code
|
Install Brother Printer Drivers on Mac by Brother Printer Support Number +1-855-267-5995
Printer drivers are one of the most essential components for printers: they let your printer communicate properly with the hardware devices connected to it. If you are having a hard time installing Brother Printer drivers on Mac, the detailed guidance in this blog will help you out. You can also get help from Brother Printer Tech Support services at +1-855-267-5995 for better results.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00588.warc.gz
|
CC-MAIN-2022-33
| 474
| 2
|
https://www.onenaught.com/posts/54/php-53-gets-a-bit-more-object-oriented-and-more
|
code
|
PHP 5 in general has been a good improvement over PHP 4, but those used to full blown object oriented program languages such as Java or C# may find some OO features still lacking in PHP 5.
PHP 5 has the usual things, such as classes, interfaces, abstract classes, inheritance, etc, but some useful programming constructs have been missing, though PHP 6, under development, aims to rectify that.
However, it seems that many of those features are going to be brought forward to the up-coming PHP 5.3 (which may make it more likely that it will get installed by web hosting companies sooner than they would likely go for PHP 6).
Sitepoint has an excellent summary of the features. The list of features include:
- The use keyword, to use namespaces!
- Namespace aliases
- Class constants
- Namespaced functions
- The Global namespace
- Autoloading namespaced classes
- Late static binding
- Variable static calls
- MySQL native driver
- Additional OpenSSL functions
- Improved command line parameter support
- XSLT profiling
- New Error Levels
- and more…
But visit the sitepoint article for more details.
If you want, there are PHP snapshot builds of PHP 5.3 (and 6.0).
(Those of you that have never coded in PHP may be surprised to see some of these features only surface now, yet, PHP has still managed to drive some of the biggest web sites out there…! I have only used it for about a year myself, in spare time, but it seems quite powerful even without these things. Can’t knock it too much then, I guess 🙂 )
The native MySQL driver will be a good boost for performance, and I will be looking forward to the XSLT profiling in code, while namespaces will be an obvious winner too.
But a lot of PHP programmers are often using it for small scripts and may not often need or want the full power of an OO language. Personally, I prefer OO, so am looking forward to this. Do you use PHP? If so, do you prefer the OO approach and like what is coming?
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473871.23/warc/CC-MAIN-20240222225655-20240223015655-00004.warc.gz
|
CC-MAIN-2024-10
| 1,946
| 23
|
https://geeks.online/how-to-reset-ntfs-permissions-in-windows-10/
|
code
|
NTFS File and folder permissions are really important to Windows. Sometimes these permissions can be altered causing software or even the operating system to work in an undesired way.
The ICACLS command is designed to help you clean this mess up.
Launch the command prompt as an Administrator and navigate through the tree of folders you need to fix.
Then launch the command:
ICACLS * /T /Q /C /RESET
ICACLS will reset the NTFS permissions of all the folders, files and subfolders. After a while, depending on the number of files, the permissions will be fixed.
Sometimes, you may need to take ownership of a tree of folders before you can reset the NTFS permissions. You can use the command below before launching ICACLS.
takeown /R /F *
Be careful: taking ownership of system folders may break your operating system.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102469.83/warc/CC-MAIN-20231210123756-20231210153756-00563.warc.gz
|
CC-MAIN-2023-50
| 827
| 9
|
https://wikiazure.com/events/teams-day-online-2020/
|
code
|
Teams Day Online is a virtual community event that brings the Microsoft Teams community together to freely share knowledge to build the Global Modern Workplace and Microsoft Teams communities.
Special thanks to Russ Basiura and the entire Team for your great effort putting this event together!
Whether you are a rookie or a master of Microsoft Teams, there are sessions to fit your organization's needs that will provide the information and training your team needs to govern, operationalize, and adopt Microsoft Teams to solve business problems.
Some of the event’s key topics included:
- How to Use Microsoft Teams as a Platform for Field and First-Line Workers
- Learn How to Make Microsoft Teams Fit Your Work Style
- Navigate How Microsoft Teams can be used to Streamline Communications
- How IT and First-Line Workers can find a Common Language to Tackle User Adoption Together
- Build a Successful Onboarding Process with out-of-the-box Office 365 Tools
- How to Externally Share Microsoft Teams Efficiently with Your Organization’s Partners
- Create Chat Bots to Assist Your Organization with Microsoft Teams
- Discover which Microsoft Teams Apps are Best for Your Organization’s Workstream
- And more!
🌐 Teams Day Online website: https://www.teamsdayonline.com/
I had the chance to be part of this Teams Day Online and deliver a session on how to leverage Azure and MS Teams to run a PoC in times of COVID-19.
In this session I shared a real-world experience of how I leveraged Azure and MS Teams to run a complex PoC with remote teams across the U.S., Mexico, and Brazil, along with some best practices and apps I use in MS Teams and Azure Bastion.
As per the organizers,
Teams Day Online was a huge success. We trained nearly 3000 people across two days and shared our love and passion for Microsoft Teams with many newcomers. This was a gargantuan undertaking that was massively successful.
Teams Day Online III will happen on April 7 and April 8, 2021.
Looking forward to a great event!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00634.warc.gz
|
CC-MAIN-2021-43
| 2,008
| 20
|
https://github.com/peterblazejewicz
|
code
|
1,667 contributions in the last year
Created a pull request in aspnet/Docs that received 3 comments
This commit adds information about 4.1.1 being added to the CDN. The original tip-off: aspnet/Templating#595 (comment) Thanks!
Created an issue in aspnet/Home that received 3 comments
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589634.23/warc/CC-MAIN-20180717090227-20180717110227-00611.warc.gz
|
CC-MAIN-2018-30
| 459
| 6
|
http://www.chegg.com/homework-help/computer-networking-a-top-down-approach-5th-edition-chapter-2-problem-22p-solution-9780136079675
|
code
|
Consider the following scenario:
Size of the file to be distributed (F) = 15 Gbits
Server upload rate (us) = 30 Mbps
Download rate of each peer (dmin or di) = 2Mbps
Now calculate the minimum distribution time for client-server and P2P (peer-to-peer) distribution for each combination of N = 10, 100, 1000 and peer upload rate u = 300 Kbps, 700 Kbps, and 2 Mbps.
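The minimum distribution times follow from the standard lower bounds for this textbook problem: client-server requires D ≥ max{NF/u_s, F/d_min}, while P2P requires D ≥ max{F/u_s, F/d_min, NF/(u_s + Nu)}. A minimal sketch of the calculation in Python (the function and variable names are my own, not part of the problem statement):

```python
# Minimum distribution time lower bounds.
# All rates are in bits per second; the file size is in bits.

F = 15e9      # file size: 15 Gbits
U_S = 30e6    # server upload rate: 30 Mbps
D_MIN = 2e6   # per-peer download rate: 2 Mbps

def client_server_time(n, u_s=U_S, f=F, d_min=D_MIN):
    """Server must push n full copies; the slowest peer must download one copy."""
    return max(n * f / u_s, f / d_min)

def p2p_time(n, u, u_s=U_S, f=F, d_min=D_MIN):
    """Bounded by the server's single upload, the slowest download,
    and the aggregate upload capacity of the whole system."""
    return max(f / u_s, f / d_min, n * f / (u_s + n * u))

for n in (10, 100, 1000):
    for u in (300e3, 700e3, 2e6):
        print(f"N={n:5d} u={u / 1e3:6.0f} kbps  "
              f"client-server: {client_server_time(n):9.1f} s  "
              f"P2P: {p2p_time(n, u):9.1f} s")
```

For example, with N = 10 the client-server bound is max(10 × 15e9 / 30e6, 15e9 / 2e6) = max(5000 s, 7500 s) = 7500 s, so the slowest peer's download link, not the server, is the bottleneck there.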
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466178.24/warc/CC-MAIN-20151124205426-00261-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 357
| 5
|
https://addons.mozilla.org/EN-US/thunderbird/collections/-Ken-Saunders-/thunderbird/?page=3
|
code
|
Thunderbird Add-ons? Here Are Some Great Ones!
by Ken Saunders
About this Collection
Thunderbird add-ons that I've either used for years, or tested and highly suggest.
55 Add-ons in this Collection
The main goal is to allow users to open (and in some cases edit) page source with external applications.
Allows for customization of the folder pane. Accounts can be rearranged and the startup folder can be chosen.
Displays the address books in a sidebar in Thunderbird's 3-pane window; it can be toggled with the F4 key or a toolbar button.
Visit my Contacts Sidebar site for recent versions and news.
Maximize the message pane by collapsing the thread pane. You can read a message in a larger area without opening a new message window.
Adds a summary of mail accounts and folders to replace the account central pane and the empty folder preview
This add-on allows you to customize the snooze delay when the snooze popup appears (Lightning required).
Allows other extensions to create new account types. This extension (sometimes called SkinkGlue) should not normally be downloaded directly by users, but instead is included in bundles with other extensions, such as the extension TweeQuilla.
Sort and search for buttons in (any) toolbar customization dialog
Adds a new action to mail filters to allow playing a particular sound when the filter matches. Also includes several sample sound files.
Zoom in, zoom out, and reset zoom with one toolbar button. Works in the Compose window too.
Protects you from phishing attacks by verifying the From: address domain name of emails as you read them. Sender Policy Framework (SPF) and DNS-based reputation lists are used.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824618.72/warc/CC-MAIN-20171021062002-20171021082002-00718.warc.gz
|
CC-MAIN-2017-43
| 1,639
| 17
|
https://answers.microsoft.com/en-us/windows/forum/windows_8-hardware/bluetooth-broadcom-driver-in-windows-7-hp-is-not/1afe0f5e-832e-48c7-bec2-4a80ee6fb0e7
|
code
|
The Broadcom Bluetooth driver in Windows 7 HP is not compatible with Windows 8. How can I fix this?
I have an HP Pavilion M6-1002Tx notebook with Windows 7 Home Premium. I want to upgrade to Windows 8 Pro, but the Upgrade Assistant says the Broadcom Bluetooth software is not compatible with Windows 8. Please help me use Bluetooth in Windows 8 Pro.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358956.39/warc/CC-MAIN-20210227114444-20210227144444-00086.warc.gz
|
CC-MAIN-2021-10
| 462
| 3
|
https://help.openthc.com/kb/lot/split
|
code
|
Split / Sub-Lot Inventory into New Lots
How to Sub-Lot or Split weight from one Lot into brand new Lots
- From the Dashboard select Inventory.
- Use filters on Inventory such as Lab Result Tested, Product, Variety, and Section to narrow down the active inventory.
- Select the Lot ID for the concentrate Lot that will be converted into Concentrate for Inhalation.
- Select Split.
- Enter a value in the Amount field for the quantity you wish to split off from the main Lot.
- Select a desired Section to move this new Lot to.
- Select the blue + button to create another Split from the main Lot.
- Select the red Bin button to remove a potential Split.
- Select Create Sub Lots to Split the inventory.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649302.35/warc/CC-MAIN-20230603165228-20230603195228-00600.warc.gz
|
CC-MAIN-2023-23
| 692
| 11
|
http://www.beholdgenealogy.com/blog/?p=619
|
code
|
Earlier this week, I finished and put up two of the last three webpages for this new site. Those were the Feedback and Buy Now pages. All that was left now was the Download page. That was going to be the trickiest because it involves a few web scripts and programs to allow calculating and emailing the trial key.
But what I discovered while I was trying to implement a check for valid email addresses was that IXWebhosting does not allow programs to be executed on their Windows servers. This means the PHP “exec” statement will not work, and compiled CGI programs (which I built using Delphi) in the cgi-bin directory will not be executed.
I was not pleased to find this out, especially when I was so near to having this all finished. So the key generation could no longer be done with my pre-compiled program that used the same code that Behold uses. I liked doing that because it guaranteed that the keys would always be generated the same way.
Other than finding a new host, which I’m not going to do right now, I’m going to have to rewrite all that code in PHP and very thoroughly check all of it to ensure it works exactly as it does in Behold. It’s unfortunate that I have to go through this work, but looking on the bright side, I can see some potential advantages of having it in PHP rather than in an executable, and I’m hoping I can have it finished within a week.
My mailing list, which was an ASP and Microsoft Access based tool on lkessler.com, did manage to copy over and work right away on IXWebhosting. That is probably the only part of this web move that has. But it is the last remnant of ASP and Access that I am using, and it may prove advantageous to convert it over to PHP and MySQL as well. Then everything will be integrated and accessible from the PHP scripts, which may be better in the long term. Not only that, but I will no longer be tied to needing a Windows hosting service if I decide or need to change again.
So just a little more pain, for potential long-term gain.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119642.3/warc/CC-MAIN-20170423031159-00313-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 2,011
| 6
|