url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
http://www.phoronix.com/scan.php?page=news_item&px=MTYzOTM
code
Crytek had just one Linux box at the show, which was running Ubuntu and showing the CRYENGINE running on Linux. As I wrote in the CRYENGINE article earlier, "Not many at GDC were excited about Linux support particularly. On the first day of the expo, only about six people had commented/recognized it was CRYENGINE on Linux." At Intel's booth I found no Linux systems at all, but all of their demos seemed to be Windows-based. At the AMD booth was one Linux box. The AMD Linux box was a slow A8 system showing that GPU PerfStudio can be run on Windows and connect to a Linux system for OpenGL debugging. GPU PerfStudio itself doesn't run on Linux for now but they were just showing that the AMD software can be used from a Windows system for debugging Linux OpenGL games/applications. Ubuntu was in use on the AMD A8 APU system. LunarG happened to be at the Game Developer's Conference but their booth wasn't manned the entire time and featured just a laptop for showing off their work on OpenGL tracing/re-playing. The most concentrated showing of non-Linux Android at GDC 2014 were the Steam Boxes running the Debian-based SteamOS. At booths like Unity, Epic Games, Occulus, Qualcomm, and elsewhere, I didn't see any pure Linux (non-Android) works on display. Of course, I was there for just a day and a half, so I might have missed another one or two Linux systems, but Linux game support was far from being heavily advertised at GDC 2014 or a popular topic. When asking random game companies and other stakeholders about Linux, many of them responded they were mostly focused on Windows games at this time, the Linux gaming market is small, or that they simply didn't have anything to show off that couldn't be done better under Windows. Stay tuned though for some more exciting GDC Linux exclusives in the coming days, aside from the articles already posted today covering Ubisoft on Linux, CRYENGINE Linux details, etc.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118740.31/warc/CC-MAIN-20170423031158-00414-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,925
6
https://it-ebooks.info/book/4636/
code
Eclipse 4 Plug-in Development by Example How to develop, build, test, package, and release Eclipse plug-ins with features for Eclipse 3.x and Eclipse 4.x As a highly extensible platform, Eclipse is used by everyone from independent software developers to NASA. Key to this is Eclipse's plug-in ecosystem, which allows applications to be developed in a modular architecture and extended through its use of plug-ins and features. Eclipse 4 Plug-in Development by Example Beginner's Guide takes the reader through the full journey of plug-in development, starting with an introduction to Eclipse plug-ins, continued through packaging and culminating in automated testing and deployment. The example code provides simple snippets which can be developed and extended to get you going quickly. Share Eclipse 4 Plug-in Development by Example
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474795.48/warc/CC-MAIN-20240229071243-20240229101243-00194.warc.gz
CC-MAIN-2024-10
834
5
http://forums.opensuse.org/showthread.php/463952-how-resize-lvm
code
Re: How to resize LVM? No it's not hard. djh-novell really summarized it in the 4 points he posted so You are just 4 steps away from what You want to do. If I were in your shoes I would do the backup and just try it. The worst thing You can encounter is recovery from backup which most likely what You will have to do anyway when reinstalling from scratch. But the thing You can gain is learning how to do that and it's so much fun to be able to resize file systems on a running system Once You get the hang of it it is actually quite easy and LVM has methods to back up some meta data or whatever that's called that in most cases lets You recover without backup if You screw something up. Originally Posted by glock356 I guess by choosing LVM set up during the installation. I never tried that but I guess it can guide You quite well and maybe propose some recommended set up. That's just a guess of course. Box: Windows 7 / Windows XP | Intel Dual-Core E5200 | ATI Radeon HD4850 | 4GB RAM Lap: openSUSE 13.1 / Windows 7 | Intel U7300 | KDE | Intel GMA 4500 | Asus UL80A | 3GB RAM
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009179.34/warc/CC-MAIN-20141125155649-00137-ip-10-235-23-156.ec2.internal.warc.gz
CC-MAIN-2014-49
1,081
6
https://www.climatechangecommunication.org/countering-misinformation/
code
The 4-D Project Countering Misinformation Misinformation is a multi-faceted problem, influencing society at political, social, technological, and psychological levels About Our Program Effective responses to misinformation require multi-disciplinary approaches. The aim of the 4D Project, led by Dr. John Cook, is to develop the “holy grail of fact-checking”: systems that automatically detect and neutralize misinformation about important science-based topics and issues. This can only be achieved by synthesizing research findings from computer science, political science, philosophy, psychology, and communication. The 4D Project will synthesize four lines of research: Detection (automatically detecting online misinformation); Deconstruction (identifying the exact nature of the misinformation); Debunking (implementing proven refutation approaches); and Deployment (inoculating and debunking in a variety of social contexts). In collaboration with the University of Exeter and Trinity College Dublin, we have constructed a comprehensive taxonomy of climate misinformation arguments and are developing supervised machine learning methods (manually training a machine to detect textual and inferential patterns) to automatically detect and categorize misinformation about climate change. This research will develop and improve machine learning techniques to detect science misinformation. Automatically flagged misinformation needs to be assessed, using a combination of scientific fact-checking and critical thinking analysis, in order to identify the exact ways that it misleads. In collaboration with critical thinking philosophers from the University of Queensland, we have developed critical thinking methods to deconstruct and assess misinformation. We continue to apply these methods to different types of misinformation (such as inductive arguments) and explore communication techniques such as parallel argumentation to inoculate people against misleading fallacies. Debunking misinformation is notoriously difficult, given the psychological complexities in correcting misconceptions. So it’s imperative that refutations follow principles informed by experimental research. We have also published research into the efficacy of inoculation to neutralize misinformation, and continue to advance research into misinformation with a variety of experiments, testing different contexts, types of misinformation, and refutational formats and approaches. In order to achieve meaningful responses to misinformation, research into detection, deconstruction, and debunking of misinformation needs to be deployed at scale in real-world applications. We have published debunkings of climate misinformation through the Skeptical Science website, translated into 24 languages and receiving 3.7 million visitors per year. We developed the Massive Open Online Course, Making Sense of Climate Science Denial, which has received over 40,000 enrolments from 185 countries. In collaboration with the National Center for Science Education and Alliance for Climate Education, we developed high school curriculum that raises climate literacy and boosts critical thinking by countering climate misconceptions. We continue to develop and monitor the effectiveness of these efforts, as well as develop new social media applications. The Cranky Uncle App! Earlier this year, we successfully crowd-funded a campaign to develop the Cranky Uncle app, a smartphone game which teaches resilience against misinformation using cartoons, gamification, and critical thinking. The game will be release later this year. Critical Thinking & Parallel Argumentation An introduction to our 2018 critical thinking research published in Environmental Research Letters, and a presentation by John Cook at 2018 CSICon in Las Vegas, explaining how parallel argumentation (in cartoon form) can inoculate the public against misinformation. Support Our Work The work of Mason's Center for Climate Change Communication (4C) would not be possible without the generous financial support we have received from philanthropic foundations and individual donors. You too can support our important work by donating via a secure online donation form. Your financial contribution will be processed on our behalf by the George Mason University Foundation, and is tax deductible.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816939.51/warc/CC-MAIN-20240415014252-20240415044252-00080.warc.gz
CC-MAIN-2024-18
4,334
22
https://righttimes.tv/we-must-control-language/
code
Check out the full ‘Talking About Racism News’ Livestream: https://righttimes.tv/talking-about-racism-news/ The first thing that you do when you wage a culture war is you wage a war on words. You wage a linguistic war because language connotates understanding. Words are the way in which we think about things and they’re also the way in which we assemble our belief structures and its the way that we essentially see the world because it is the way in which we interpret the world. Which is why the first thing that these people have done in this culture war is waging the war on language. And I think that it is exceptionally important for us to realise this and look to take these words back. Or to create words of our own which have their own connotations so that we’re able to do the same that they have done to us essentially within the culture wars by pushing back against them through linguistic warfare. ♦️ Main YouTube Channel: https://tinyurl.com/y82g2pd2 ♦️ Gab TV: https://tv.gab.com/channel/RightTimes ♦️ DLive: https://dlive.tv/RightTimes.tv ♦️ Trovo: https://trovo.live/RightTimes ♦️ Instagram: https://www.instagram.com/righttimes.tv/ ♦️ Gab: https://gab.com/RightTimes ♦️ Telegram: https://t.me/RightTimesTV
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488259200.84/warc/CC-MAIN-20210620235118-20210621025118-00297.warc.gz
CC-MAIN-2021-25
1,261
10
https://sources.debian.org/src/gajim-antispam/1.4.21-2/README.md/
code
# Anti_spam Plugin for Gajim This Plugin allows you to dissociate itself from the spam. Use special plugin, that manages automatic download and installation of others plugins, it is called Plugin Installer. ### Block pubsub Block incoming messages from pubsub ### Message size limit Block incoming messages that have size more than configured. Default value -1 mean that any sized messages are coming. ### Anti spam question Block incoming messages from users not in your roster. In response, the Plugin sends a question that you configured. After correct answer(also configurable) you will receive all new messages from user. **Attention!** All messages before correct answer will be lost. Also you can enable this function, in Plugin config, for conference private messages. In some servers, the question in conference private does not reach your interlocutor. This can lead to the fact that you will not receive any messages from him, and he will not know it.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987781397.63/warc/CC-MAIN-20191021171509-20191021195009-00234.warc.gz
CC-MAIN-2019-43
962
11
https://rpg.stackexchange.com/questions/151599/how-do-readied-actions-involving-movement-interact-with-being-charged?noredirect=1
code
On Bob's next turn, he charges Alice. The attack he would take at the end of his charge meets the trigger conditions for Alice's readied action, so she teleports away. What exactly happens? - Does Bob get to make his melee attack before Alice teleports? - Does it matter whether Alice teleports somewhere that would have been within Bob's original charge range, versus somewhere he could not have charged in the first place? - If Bob doesn't get to attack, does he still lose his entire full round action for charging? Or has he only spent a move action if he never gets to attack? What if he's already moved farther than he could have with a single move action (since you can charge twice your speed)? - Would it matter if Alice had chosen a different trigger condition for her readied action, like "a creature comes within 20 feet of me?"
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100535.26/warc/CC-MAIN-20231204214708-20231205004708-00572.warc.gz
CC-MAIN-2023-50
840
6
http://www.osnews.com/thread?569596
code
Username or EmailPassword The sad thing is not that this is apparently a thing - no, the sad thing is that people actually believe this to be true. If you believe Apple and Google really care about you as a user, you've already lost the battle. Way too pro Google. Fun list though. Would be better if it contained at least 1 or 2 negative points for google to give it balance and make it more fun (the fun comes from the surprise shift in tone when the list goes from positive to negative aspect of similar traits).
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123484.45/warc/CC-MAIN-20170423031203-00166-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
515
3
http://00y.biz/firefox-custom-buttons-addon.html
code
Development tool to program IE add-ons in C T, C. Create IE toolbar with custom buttons, explorer bars, add items to IE context menu / popup menu. Firefox custom buttons addon corrupt download or incomplete installation of Resume Maker with Career Planning software. Recommendation: Scan your PC for TOOLBAR. EXE-related files. EXE Errors Caution: We do not recommend downloading TOOLBAR. Another program is in conflict with Resume Maker with Career Planning and its shared referenced files. EXE registry corruption How To Fix TOOLBAR.john Sonmez teaches you how to create a Chrome firefox custom buttons addon Extension in virtually no time at all,toolbars in Windows 7. You can add Toolbars to your taskbar area. This is required in develop a toolbar for google chrome browser extension for business order to enable us to get the URL of the current tab to pass on to GTmetrix. Youll also notice Ive added a permissions browser toolbar development opera section that specifies that we need to have permission to access the activeTab. just follow the step-by-step guide given. By offering the right content to the right visitor at the right time, Wibiya empowers publishers to strongly connect with their readers, helping build deeper relationships while enhancing the site visitors experience. Our publishers are seeing a dramatic improvement in click-through-rate performance, leading to increased likes, shares, page views, and overall exposure. Wibiya says it. The backend user interface can be used to manage a specific lists of stores, called Campaigns. Campaigns can be sorted into most popular stores or featured stores for the current season and special events. Key Benefits of using A4C for server side development: The development of server APIs and browser extensions or mobile apps resides. Yahoo! Search: m Yahoo! Mail: m Yahoo! News: m Yahoo! Shopping: m 3 Ensure that "Show my home page" is selected from the "When Firefox starts" menu. This will load the page you set whenever Firefox starts or when you click the Home button. Your changes are saved automatically. Method 4 Edge 1 Click the. Firefox custom buttons addon! Computer freezes, and reports back every detail about your 22 software development company extensions 22 file. You can now discover everything you need to know about your 22 file. Do you want to know exactly what it is, and how to open it? Analyzes, the revolutionary 22 File Analysis Tool scans, instantly! Finally, who created. to create this toolbar, when the user presses Enter in google chrome toolbar button the toolbar above, the IDE s default browser opens and.a little harmless fun in the office! If you want to firefox custom buttons addon play a little joke on someone, then go over and make their ribbon disappear. In fact, wait until they are away from their computer, then you can be the hero and go over and fix it for them. Eventually youll hear them getting frustrated (especially if its a co-worker that sits near your desk)). In that case, you don't want to modify the source files in various www directories within the top-level platforms directory, because they're regularly replaced with the top-level www directory's cross-platform source. Instead, the top-level merges directory offers a place to specify assets to deploy on specific platforms. Each platform-specific subdirectory within merges mirrors the directory. Tumblr is a huge blogging platform where you can create and maintain a blog. Note that Firefox users will need to install Greasemonkey before installing XKit. you can add any kind of post, decide when to publish it, share it on Facebook or. This extension adds a Tumblr button to your toolbar, which gives you access to. I'm trying to create a custom browser toolbar button that will do a few things. One that I'm trying to do at the moment is just simply return the URL of whatever page the user is visiting. I wanted to create a Win32 application using T. Wolfram disclaims any warranties regarding the security, reliability, timeliness, and performance of the Gadget. You understand and agree that You access and/or use the Gadget at Your own discretion and risk, and that You will be solely responsible for any damages to Your computer system or loss of data that results from Your accessing or. You know the feeling. You open up your browser to head to your favorite website only to see an ugly toolbar taking up space. And instead of your usual home page, your browser starts with a random search page for a company you've never heard of. I tell you how to get your old home. Customize chrome toolbar 2 wibiya. Ensure that "Start with home page" is selected. You'll find this in the "Startup" section of the "General" tab. This will make your Yahoo! page(s) open whenever Internet Explorer starts. 4 Click "OK" to save your changes. Your new home page will be set, and Yahoo! will load whenever Internet. click on the double diamond icon in the address bar browser plugin services companies on the far right. And once it's loaded, allow, select. Making Gmail your default mail client in Chrome is easy. Click that, just login firefox custom buttons addon to your inbox, then Done. Next,outside the body tag, unfortunately, firefox 4.0 hides the Menu firefox custom buttons addon Bar by. If you deselect the Menu Bar, the element is not affected. Because we've appended the iframe to the root element, i want to use extension APIs in the toolbar! These menu items will be hidden. Build browser plug in windows 8 1: Google Toolbar 3.0 (February 16 packages tracking and links for ISBN numbers. WordTranslator System Requirements: Windows and Microsoft Internet Explorer. 2005) New features: SpellCheck AutoLink for U.S. Address on a web page to an Google map, Name : A short, plain text string (no more than 45 characters) that identifies the theme. description : A. I'll give you an example. Toolbar layout:?xml version"1.0" encoding"utf-8"? olbar style style/ToolBarStyle. Event" xmlns:android"m/apk/res/android" android:id id/toolbar" android:layout_width"match_parent" android:layout_height"wrap_content" android:background"?attr/colorPrimary" android:minHeight dimen/abc_action_bar_default_height_material" / Styles: style name"ToolBarStyle" parent"se style name"se" parent" item name"popupTheme" @style/ght /item item name"theme" @style/tionBar /item /style style name"ToolBarStyle. Event" parent"ToolBarStyle" item name"titleTextAppearance" @style/olbar. Title /item /style style name"olbar. Title" parent"olbar. Title"!-Any text styling can be done here- item name"android:textStyle" normal /item item name"android:textSize" @dimen/event_title_text_size /item /style share improve this answer up vote 20 down vote Here is title text dependant approach to find TextView instance from Toolbar. public. Google Toolbar is faster, sleeker and more personalized than ever before. Try. Chrome, Google s fast modern browser, to get all of the features of Toolbar and. Initially youll only have a few commands to choose from. Theyll be Properties, and g Free With close attention to detail and an unflinching commitment to quality, custom Plugin Service includes create toolbar chrome Custom Development We develop plugins by injecting features that work in perfect unison with your requirements. Redundancy does not find a. Or maybe you need to allow users to perform some action on a kiosk machine that allows access only to IE and not the Start menu or desktop. Here's how you can do things like that, and more. Step 1: Create Icons To create icon files that can be used for your custom button, you. For example, from a LinkedIn page you can collect all person's jobs, skills and education history. Direct XML, Excel and SQL multi-table output. For example, collecting products catalog with attached table of user reviews. Background data scraping using a headless WebKit browser. Scheduled execution on any interval Simultaneous processing of multiple projects. The Data Toolbar for Chrome and Firefox can run side-by-side with Data Toolbar for Internet Explorer. Report bugs and suggestions to. Apple Shopping Bag Popular Recent Categories Productivity Social Networking Security. Entertainment Bookmarking Search Tools Developer Shopping News Translation Photos URL Shorteners. RSS browser extension for business firefox Tools Other ' Install now Want to develop your own extensions? Safari Extensions are a new way for developers to enhance and customize the browsing experience. If you know how to develop web pages, then you. Note that Firefox users will need to install. Greasemonkey before installing XKit. Once download is complete, load your Tumblr dashboard to complete the installation. A New X icon will be added to your dashboard. Through here you can browse your existing extensions, customize your extensions and browse for new extensions to install. The extensions offer. Be sure to share your expertise with your fellow readers in the comments. "Toolbar manager Initialization event INITIALIZATION. Start of selection event. START -OF-SELECTION. Subroutine to get values from tstc table PERFORM fetch_data. subroutine for alv display PERFORM alv_output. CLASS lcl_alv_toolbar DEFINITION ALV event handler CLASS lcl_alv_toolbar DEFINITION. PUBLIC SECTION. Constructor METHODS : constructor IMPORTING io_alv_grid TYPE REF TO cl_gui_alv_grid, Event for toolbar on_toolbar FOR EVENT toolbar OF. (as opposed to Find problem) Again major cause of problem is MenuX and especially if caret browsing is involved. Another problem, after a bad change to a.js file in the keyconfig extension then none of the vertical scrolling worked except the scrollbar itself. Backed off the change and this is working again. Firefox will not. 2 days ago. The Trademark build your own toolbar for google Ninja, the new Apple MacBook Magic Toolbar we ve all been dreaming of seems to be a reality probably.
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320669.83/warc/CC-MAIN-20170626032235-20170626052235-00107.warc.gz
CC-MAIN-2017-26
9,885
30
https://www.systutorials.com/docs/linux/man/8-mkfs.xfs/
code
mkfs.xfs (8) - Linux Manuals mkfs.xfs: construct an XFS filesystem mkfs.xfs - construct an XFS filesystem SYNOPSISmkfs.xfs [ -b block_size ] [ -m global_metadata_options ] [ -d data_section_options ] [ -f ] [ -i inode_options ] [ -l log_section_options ] [ -n naming_options ] [ -p protofile ] [ -q ] [ -r realtime_section_options ] [ -s sector_size ] [ -L label ] [ -N ] [ -K ] device DESCRIPTIONmkfs.xfs constructs an XFS filesystem by writing on a special file using the values found in the arguments of the command line. It is invoked automatically by mkfs(8) when it is given the -t xfs option. In its simplest (and most commonly used form), the size of the filesystem is determined from the disk driver. As an example, to make a filesystem with an internal log on the first partition on the first SCSI disk, use: - mkfs.xfs /dev/sda1 The metadata log can be placed on another device to reduce the number of disk seeks. To create a filesystem on the first partition on the first SCSI disk with a 10MiB log located on the first partition on the second SCSI disk, use: - mkfs.xfs -l logdev=/dev/sdb1,size=10m /dev/sda1 In the descriptions below, sizes are given in sectors, bytes, blocks, kilobytes, megabytes, gigabytes, etc. Sizes are treated as hexadecimal if prefixed by 0x or 0X, octal if prefixed by 0, or decimal otherwise. The following lists possible multiplication suffixes: - s - multiply by sector size (default = 512, see -s - option below). - b - multiply by filesystem block size (default = 4K, see -b - option below). k - multiply by one kilobyte (1,024 bytes). - m - multiply by one megabyte (1,048,576 bytes). - g - multiply by one gigabyte (1,073,741,824 bytes). - t - multiply by one terabyte (1,099,511,627,776 bytes). - p - multiply by one petabyte (1,024 terabytes). - e - multiply by one exabyte (1,048,576 terabytes). - m - multiply by one megabyte (1,048,576 bytes). - -b block_size_options - This option specifies the fundamental block size of the filesystem. The valid block_size_options are: log=value or size=value and only one can be supplied. The block size is specified either as a base two logarithm value with log=, or in bytes with size=. The default value is 4096 bytes (4 KiB), the minimum is 512, and the maximum is 65536 (64 KiB). - To specify any options on the command line in units of filesystem blocks, this option must be specified first so that the filesystem block size is applied consistently to all options. - Although mkfs.xfs will accept any of these values and create a valid filesystem, XFS on Linux can only mount filesystems with pagesize or smaller blocks. - -m global_metadata_options These options specify metadata format options that either apply to the entire filesystem or aren't easily characterised by a specific functionality group. The - This is used to create a filesystem which maintains and checks CRC information in all metadata objects on disk. The value is either 0 to disable the feature, or 1 to enable the use of CRCs. - CRCs enable enhanced error detection due to hardware issues, whilst the format changes also improves crash recovery algorithms and the ability of various tools to validate and repair metadata corruptions when they are found. The CRC algorithm used is CRC32c, so the overhead is dependent on CPU architecture as some CPUs have hardware acceleration of this algorithm. Typically the overhead of calculating and checking the CRCs is not noticeable in normal operation. - By default, mkfs.xfs will enable metadata CRCs. - This option enables the use of a separate free inode btree index in each allocation group. The value is either 0 to disable the feature, or 1 to create a free inode btree in each allocation group. - The free inode btree mirrors the existing allocated inode btree index which indexes both used and free inodes. The free inode btree does not index used inodes, allowing faster, more consistent inode allocation performance as filesystems age. - By default, mkfs.xfs will create free inode btrees for filesystems created with the (default) -m crc=1 option set. When the option -m crc=0 is used, the free inode btree feature is not supported and is disabled. - Use the given value as the filesystem UUID for the newly created filesystem. The default is to generate a random UUID. - This option enables the creation of a reverse-mapping btree index in each allocation group. The value is either 0 to disable the feature, or 1 to create the btree. - The reverse mapping btree maps filesystem blocks to the owner of the filesystem block. Most of the mappings will be to an inode number and an offset, though there will also be mappings to filesystem metadata. This secondary metadata can be used to validate the primary metadata or to pinpoint exactly which data has been lost when a disk error occurs. - By default, mkfs.xfs will not create reverse mapping btrees. This feature is only available for filesystems created with the (default) -m crc=1 option set. When the option -m crc=0 is used, the reverse mapping btree feature is not supported and is disabled. - This option enables the use of a separate reference count btree index in each allocation group. The value is either 0 to disable the feature, or 1 to create a reference count btree in each allocation group. - The reference count btree enables the sharing of physical extents between the data forks of different files, which is commonly known as "reflink". Unlike traditional Unix filesystems which assume that every inode and logical block pair map to a unique physical block, a reflink-capable XFS filesystem removes the uniqueness requirement, allowing up to four billion arbitrary inode/logical block pairs to map to a physical block. If a program tries to write to a multiply-referenced block in a file, the write will be redirected to a new block, and that file's logical-to-physical mapping will be changed to the new block ("copy on write"). This feature enables the creation of per-file snapshots and deduplication. It is only available for the data forks of regular files. - By default, mkfs.xfs will not create reference count btrees and therefore will not enable the reflink feature. This feature is only available for filesystems created with the (default) -m crc=1 option set. When the option -m crc=0 is used, the reference count btree feature is not supported and reflink is disabled. - -d data_section_options These options specify the location, size, and other parameters of the data section of the filesystem. The valid - This is used to specify the number of allocation groups. The data section of the filesystem is divided into allocation groups to improve the performance of XFS. More allocation groups imply that more parallelism can be achieved when allocating blocks and inodes. The minimum allocation group size is 16 MiB; the maximum size is just under 1 TiB. The data section of the filesystem is divided into value allocation groups (default value is scaled automatically based on the underlying device size). - This is an alternative to using the agcount suboption. The value is the desired size of the allocation group expressed in bytes (usually using the m or g suffixes). This value must be a multiple of the filesystem block size, and must be at least 16MiB, and no more than 1TiB, and may be automatically adjusted to properly align with the stripe geometry. The agcount and agsize suboptions are mutually exclusive. - This can be used to specify the name of the special file containing the filesystem. In this case, the log section must be specified as internal (with a size, see the -l option below) and there can be no real-time section. - This is used to specify that the file given by the name suboption is a regular file. The value is either 0 or 1, with 1 signifying that the file is regular. This suboption is used only to make a filesystem image. If the value is omitted then 1 is assumed. - This is used to specify the size of the data section. This suboption is required if -d file[=1] is given. Otherwise, it is only needed if the filesystem should occupy less space than the size of the special file. - This is used to specify the stripe unit for a RAID device or a logical volume. The value has to be specified in 512-byte block units. Use the su suboption to specify the stripe unit size in bytes. This suboption ensures that data allocations will be stripe unit aligned when the current end of file is being extended and the file size is larger than 512KiB. Also inode allocations and the internal log will be stripe unit aligned. - This is an alternative to using sunit. The su suboption is used to specify the stripe unit for a RAID device or a striped logical volume. The value has to be specified in bytes, (usually using the m or g suffixes). This value must be a multiple of the filesystem block size. - This is used to specify the stripe width for a RAID device or a striped logical volume. The value has to be specified in 512-byte block units. Use the sw suboption to specify the stripe width size in bytes. This suboption is required if -d sunit has been specified and it has to be a multiple of the -d sunit suboption. - suboption is an alternative to using swidth. The sw suboption is used to specify the stripe width for a RAID device or striped logical volume. The value is expressed as a multiplier of the stripe unit, usually the same as the number of stripe members in the logical volume configuration, or data disks in a RAID device. - When a filesystem is created on a logical volume device, mkfs.xfs will automatically query the logical volume for appropriate sunit and swidth values. - This option disables automatic geometry detection and creates the filesystem without stripe geometry alignment even if the underlying storage device provides this information. - Force overwrite when an existing filesystem is detected on the device. By default, mkfs.xfs will not write to the device if it suspects that there is a filesystem or partition table on the device already. - -i inode_options - This option specifies the inode size of the filesystem, and other inode allocation parameters. The XFS inode contains a fixed-size part and a variable-size part. The variable-size part, whose size is affected by this option, can contain: directory data, for small directories; attribute data, for small attribute sets; symbolic link data, for small symbolic links; the extent list for the file, for files with a small number of extents; and the root of a tree describing the location of extents for the file, for files with a large number of extents. - size=value | log=value | perblock=value - The inode size is specified either as a value in bytes with size=, a base two logarithm value with log=, or as the number fitting in a filesystem block with perblock=. The minimum (and default) value is 256 bytes without crc, 512 bytes with crc enabled. The maximum value is 2048 (2 KiB) subject to the restriction that the inode size cannot exceed one half of the filesystem block size. - XFS uses 64-bit inode numbers internally; however, the number of significant bits in an inode number is affected by filesystem geometry. In practice, filesystem size and inode size are the predominant factors. The Linux kernel (on 32 bit hardware platforms) and most applications cannot currently handle inode numbers greater than 32 significant bits, so if no inode size is given on the command line, mkfs.xfs will attempt to choose a size such that inode numbers will be < 32 bits. If an inode size is specified, or if a filesystem is sufficiently large, mkfs.xfs will warn if this will create inode numbers > 32 significant bits. - This specifies the maximum percentage of space in the filesystem that can be allocated to inodes. The default value is 25% for filesystems under 1TB, 5% for filesystems under 50TB and 1% for filesystems over 50TB. - In the default inode allocation mode, inode blocks are chosen such that inode numbers will not exceed 32 bits, which restricts the inode blocks to the lower portion of the filesystem. The data block allocator will avoid these low blocks to accommodate the specified maxpct, so a high value may result in a filesystem with nothing but inodes in a significant portion of the lower blocks of the filesystem. (This restriction is not present when the filesystem is mounted with the inode64 option on 64-bit platforms). - Setting the value to 0 means that essentially all of the filesystem can become inode blocks, subject to inode32 restrictions. - This value can be modified with xfs_growfs(8). - This is used to specify that inode allocation is or is not aligned. The value is either 0 or 1, with 1 signifying that inodes are allocated aligned. If the value is omitted, 1 is assumed. The default is that inodes are aligned. Aligned inode access is normally more efficient than unaligned access; alignment must be established at the time the filesystem is created, since inodes are allocated at that time. This option can be used to turn off inode alignment when the filesystem needs to be mountable by a version of IRIX that does not have the inode alignment feature (any release of IRIX before 6.2, and IRIX 6.2 without XFS patches). - This is used to specify the version of extended attribute inline allocation policy to be used. By default, this is 2, which uses an efficient algorithm for managing the available inline inode space between attribute and extent data. - The previous version 1, which has fixed regions for attribute and extent data, is kept for backwards compatibility with kernels older than version 2.6.16. - This is used to enable 32bit quota project identifiers. The value is either 0 or 1, with 1 signifying that 32bit projid are to be enabled. If the value is omitted, 1 is assumed. (This default changed in release version 3.2.0.) - Enable sparse inode chunk allocation. The value is either 0 or 1, with 1 signifying that sparse allocation is enabled. If the value is omitted, 1 is assumed. Sparse inode allocation is disabled by default. This feature is only available for filesystems formatted with -m crc=1. - When enabled, sparse inode allocation allows the filesystem to allocate smaller than the standard 64-inode chunk when free space is severely limited. This feature is useful for filesystems that might fragment free space over time such that no free extents are large enough to accommodate a chunk of 64 inodes. Without this feature enabled, inode allocations can fail with out of space errors under severe fragmented free space conditions. - -l log_section_options These options specify the location, size, and other parameters of the log section of the filesystem. The valid - This is used to specify that the log section is a piece of the data section instead of being another device or logical volume. The value is either 0 or 1, with 1 signifying that the log is internal. If the value is omitted, 1 is assumed. - This is used to specify that the log section should reside on the device separate from the data section. The internal=1 and logdev options are mutually exclusive. - This is used to specify the size of the log section. - If the log is contained within the data section and size isn't specified, mkfs.xfs will try to select a suitable log size depending on the size of the filesystem. The actual logsize depends on the filesystem block size and the directory block size. - Otherwise, the size suboption is only needed if the log section of the filesystem should occupy less space than the size of the special file. The value is specified in bytes or blocks, with a b suffix meaning multiplication by the filesystem block size, as described above. The overriding minimum value for size is 512 blocks. With some combinations of filesystem block size, inode size, and directory block size, the minimum log size is larger than 512 blocks. - This specifies the version of the log. The current default is 2, which allows for larger log buffer sizes, as well as supporting stripe-aligned log writes (see the sunit and su options, below). - The previous version 1, which is limited to 32k log buffers and does not support stripe-aligned writes, is kept for backwards compatibility with very old 2.4 kernels. - This specifies the alignment to be used for log writes. The value has to be specified in 512-byte block units. Use the su suboption to specify the log stripe unit size in bytes. Log writes will be aligned on this boundary, and rounded up to this boundary. This gives major improvements in performance on some configurations such as software RAID5 when the sunit is specified as the filesystem block size. The equivalent byte value must be a multiple of the filesystem block size. Version 2 logs are automatically selected if the log sunit suboption is specified. - The su suboption is an alternative to using sunit. - This is used to specify the log stripe. The value has to be specified in bytes, (usually using the s or b suffixes). This value must be a multiple of the filesystem block size. Version 2 logs are automatically selected if the log su suboption is specified. - This changes the method of logging various persistent counters in the superblock. Under metadata intensive workloads, these counters are updated and logged frequently enough that the superblock updates become a serialization point in the filesystem. The value can be either 0 or 1. - With lazy-count=1, the superblock is not modified or logged on every change of the persistent counters. Instead, enough information is kept in other parts of the filesystem to be able to maintain the persistent counter values without needed to keep them in the superblock. This gives significant improvements in performance on some configurations. The default value is 1 (on) so you must specify lazy-count=0 if you want to disable this feature for older kernels which don't support it. - -n naming_options These options specify the version and size parameters for the naming (directory) area of the filesystem. The valid - size=value | log=value - The block size is specified either as a value in bytes with size=, or as a base two logarithm value with log=. The block size must be a power of 2 and cannot be less than the filesystem block size. The default size value for version 2 directories is 4096 bytes (4 KiB), unless the filesystem block size is larger than 4096, in which case the default value is the filesystem block size. For version 1 directories the block size is the same as the filesystem block size. - The naming (directory) version value can be either 2 or 'ci', defaulting to 2 if unspecified. With version 2 directories, the directory block size can be any power of 2 size from the filesystem block size up to 65536. - The version=ci option enables ASCII only case-insensitive filename lookup and version 2 directories. Filenames are case-preserving, that is, the names are stored in directories using the case they were created with. - Note: Version 1 directories are not supported. This feature allows the inode type to be stored in the directory structure so that the do not need to look up the inode to determine the inode type. The value is either 0 or 1, with 1 signifying that filetype information will be stored in the directory structure. The default value is 1. When CRCs are enabled (the default), the ftype functionality is always enabled, and cannot be turned off. - -p protofile If the optional argument is given, as a prototype file and takes its directions from that file. The blocks and inodes specifiers in the are provided for backwards compatibility, but are otherwise unused. The syntax of the protofile is defined by a number of tokens separated by spaces or newlines. Note that the line numbers are not part of the syntax but are meant to help you in the following discussion of the file 1 /stand/diskboot 2 4872 110 3 d--777 3 1 4 usr d--777 3 1 5 sh ---755 3 1 /bin/sh 6 ken d--755 6 1 7 $ 8 b0 b--644 3 1 0 0 9 c0 c--644 3 1 0 0 10 fifo p--644 3 1 11 slink l--644 3 1 /a/symbolic/link 12 : This is a comment line 13 $ 14 $ - Line 1 is a dummy string. (It was formerly the bootfilename.) It is present for backward compatibility; boot blocks are not used on SGI systems. - Note that some string of characters must be present as the first line of the proto file to cause it to be parsed correctly; the value of this string is immaterial since it is ignored. - Line 2 contains two numeric values (formerly the numbers of blocks and inodes). These are also merely for backward compatibility: two numeric values must appear at this point for the proto file to be correctly parsed, but their values are immaterial since they are ignored. - The lines 3 through 11 specify the files and directories you want to include in this filesystem. Line 3 defines the root directory. Other directories and files that you want in the filesystem are indicated by lines 4 through 6 and lines 8 through 10. Line 11 contains symbolic link syntax. - Notice the dollar sign ($) syntax on line 7. This syntax directs the mkfs.xfs command to terminate the branch of the filesystem it is currently on and then continue from the directory specified by the next line, in this case line 8. It must be the last character on a line. The colon on line 12 introduces a comment; all characters up until the following newline are ignored. Note that this means you cannot have a file in a prototype file whose name contains a colon. The $ on lines 13 and 14 end the process, since no additional specifications follow. - File specifications provide the following: * file mode * user ID * group ID * the file's beginning contents - A 6-character string defines the mode for a file. The first character of this string defines the file type. The character range for this first character is -bcdpl. A file may be a regular file, a block special file, a character special file, directory files, named pipes (first-in, first out files), and symbolic links. The second character of the mode string is used to specify setuserID mode, in which case it is u. If setuserID mode is not specified, the second character is -. The third character of the mode string is used to specify the setgroupID mode, in which case it is g. If setgroupID mode is not specified, the third character is -. The remaining characters of the mode string are a three digit octal number. This octal number defines the owner, group, and other read, write, and execute permissions for the file, respectively. For more information on file permissions, see the chmod(1) command. - Following the mode character string are two decimal number tokens that specify the user and group IDs of the file's owner. In a regular file, the next token specifies the pathname from which the contents and size of the file are copied. In a block or character special file, the next token are two decimal numbers that specify the major and minor When a file is a symbolic link, the next token specifies the contents of the link. When the file is a directory, the mkfs.xfs command creates the entries dot (.) and dot-dot (..) and then reads the list of names and file specifications in a recursive manner for all of the entries in the directory. A scan of the protofile is always terminated with the dollar ( $ ) token. - Quiet option. Normally mkfs.xfs prints the parameters of the filesystem to be constructed; the -q flag suppresses this. - -r realtime_section_options These options specify the location, size, and other parameters of the real-time section of the filesystem. The valid - This is used to specify the device which should contain the real-time section of the filesystem. The suboption value is the name of a block device. - This is used to specify the size of the blocks in the real-time section of the filesystem. This value must be a multiple of the filesystem block size. The minimum allowed size is the filesystem block size or 4 KiB (whichever is larger); the default size is the stripe width for striped volumes or 64 KiB for non-striped volumes; the maximum allowed size is 1 GiB. The real-time extent size should be carefully chosen to match the parameters of the physical media used. - This is used to specify the size of the real-time section. This suboption is only needed if the real-time section of the filesystem should occupy less space than the size of the partition or logical volume containing the section. - This option disables stripe size detection, enforcing a realtime device with no stripe geometry. - -s sector_size - This option specifies the fundamental sector size of the filesystem. The sector_size is specified either as a value in bytes with size=value or as a base two logarithm value with log=value. The default sector_size is 512 bytes. The minimum value for sector size is 512; the maximum is 32768 (32 KiB). The sector_size must be a power of 2 size and cannot be made larger than the filesystem block size. - To specify any options on the command line in units of sectors, this option must be specified first so that the sector size is applied consistently to all options. - -L label - Set the filesystem label. XFS filesystem labels can be at most 12 characters long; if label is longer than 12 characters, mkfs.xfs will not proceed with creating the filesystem. Refer to the mount(8) and xfs_admin(8) manual entries for additional information. - Causes the file system parameters to be printed out without really creating the file system. - Do not attempt to discard blocks at mkfs time. - Prints the version number and exits. BUGSWith a prototype file, it is not possible to specify hard links.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817289.27/warc/CC-MAIN-20240419043820-20240419073820-00409.warc.gz
CC-MAIN-2024-18
25,587
146
https://ofm.gazzetta3725.fun/
code
GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. Have a question about this project. When I plugin my controller whatever game I am playing doesn't show that my controller is plugged in as a player. Inside the controller there are some wires that connect to circuits, taking it apart and re-bonding the wires can fix the broken connection. Possibly the part of the controller you plugin has been physically altered so it doesn't fit correctly into the console port; if it isn't too bad you may be able to force the plastic back into place. Machines SL. MameCab only. All games. Mickey and Minnie Mouse wish you a Happy Birthday. Max size: xpx. Over the past decade, the President and the Congress have not only authorized drug testing by public and private employers but have also required or encouraged it in some workplaces. Their actions provide legal authority for drug testing that overrides all contrary authority except the U. S Constitution. Passing a drug test could also be required for court, a doctor visit, or to participate in college or professional sports. The state is bordered by the Indian states of Himachal Pradesh to the east, Haryana to the south and southeast and Rajasthan to the southwest as well as the Pakistani province of Punjab to the west. It is also bounded to the north by Jammu and Kashmir. Our Polaris proprietary additive system is formulated using the latest lubricant technology to protect vital engine components on startup, control moisture and deliver maximum engine protection for Polaris off-road vehicles. Only Polaris lubricants are specially developed with our engines to handle the intense conditions and high-performance requirements demanded by Polaris ATV, Ranger, General and RZR riders worldwide.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152129.33/warc/CC-MAIN-20210726120442-20210726150442-00425.warc.gz
CC-MAIN-2021-31
1,833
7
http://www.vuxml.org/freebsd/f00acdec-b59f-11e8-805d-001e2a3f778d.html
code
X11 Session -- SDDM allows unauthorised unlocking An issue was discovered in SDDM through 0.17.0. If configured with ReuseSession=true, the password is not checked for users with an already existing session. Any user with access to the system D-Bus can therefore unlock any graphical session. The default configuration of SDDM on FreeBSD is not affected, since it has ReuseSession=false. Copyright © 2003-2005 Jacques Vidrine and contributors. Please see the source of this document for full copyright
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657144.94/warc/CC-MAIN-20230610062920-20230610092920-00672.warc.gz
CC-MAIN-2023-23
502
5
https://squattheplanet.com/threads/looking-to-build-or-purchase-a-cabin-near-denver-colorado.7196/
code
- Aug 8, 2010 Hi. I am looking to either purchase or build a cabin in a fairly rural wilderness area as cheaply as possible. I don't know if this is even possible, but ideally it would be within a 30 minute drive of Denver, Colorado. Ideally it would also have a decent amount of open space where dogs could run around, either land that I could purchase cheaply, or open land that is free to use without purchase. To reply you don't need to meet all the criteria - if its further from Denver, I'll consider it - as long as its not in the middle of nowhere. Also, a place with edible fruit, vegetables, and other plants nearby would be great. Any ideas? Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392057.6/warc/CC-MAIN-20200527013445-20200527043445-00267.warc.gz
CC-MAIN-2020-24
660
2
http://fileextensionjnlp.net/
code
In this instance we come across a file extension which is in the helpful position of linking itself to only one program, which helps to minimize confusion, at least in the identification of a file based on naming conventions. The .jnlp file is used as a package that neatly bundles all the necessary information required to download and/or execute a remote application. To help us better understand the nature and purpose of this breed of file, let's first begin by outlining what Java is exactly. In short, it is a programming language used for scripting, software creation, interface enhancement, and much more. This particular language has been optimized for use and transmission over a network, specifically the world's largest network, the internet. Regtask was able to clean up my computer and speed up the performance! I highly recommend it My computer was always popping up with registry errors and I couldnt get rid of them. Regtask did the job for me and took less then 5 minutes to speed up my computer. I have to admit I'm no computer geek. When something goes wrong, I usually have to call someone to fix it and it costs me a lot of money. When my computer started to experience random freezes, I just used Regtask and now it works normally again. My computer is 5 years old and runs extremely slow now. I was a bit sceptical about using Regtask, but just decided to give it a shot in the end and it worked out fantastically for me. Michael C . In many cases, Java will work in the background to silently perform everything from rudimentary to complex tasks and functions. In other cases, Java can be used to create what is essentially a stand alone program which can be run from a website. In almost all cases, Java is implemented to facilitate some degree of user interaction, whether it be as simple as a pop up window that displays an enlarged image or a full blown internet relay chat protocol interface in a browser window. Instructions on Using Regtask to Solve Computer Problems Start Your Free File Scan Regtask Software will scan your computer system to check if it can help to speed up your computer. Scan is Done Once the scan is complete, Regtask will prompt you to fix all the errors it discovered to speed up your computer Errors are Fixed The repair process takes less then 2 minutes for 94% of users. Just click on Continue and let Regtask speed up your computer immediately! : Any Windows Version (including Vista) : 1MB (10 seconds on most Internet connections) More Info Regarding File Extension Jnlp When using a .jnlp file (which stands for Java network launching protocol), you are basically going to be dealing with a standalone program, while it is still accessed and run remotely; this is not an instance in which the code is running in the background with no user input. The .jnlp acts as a launcher and manager for various applications which are intended to be run from a certain server on the internet or a local network. While appearing as one file, it is actually an archive of multiple files, and is formatted in the XML coding language. XML, like HTML, is readable using the text editor of your choice; this can be helpful in cases when the file needs to be identified, but the user does not wish to actually execute the program. Online Java applications are called 'applets,' but unlike applets which are stored exclusively on a remote server and run from there, a .jnlp web start file is stored locally on the user's machine. Because of this, the program can be initiated directly, avoiding the necessity of visiting a specific web page in a browser for execution. If you want to open such a file, there is a requisite of having a program which will interpret the data organized in the XML file. If you have the Java runtime environment installed, then you are good to go. This is not usually installed by default on new computers, but can be downloaded and installed for free from the official Java website. If you don't have the full runtime environment installed, you have the option of downloading Java 'web start,' also available from the official Java website. Java web start acts as a launcher and manager for all your different Java based applications, and will read the data in your .jnlp file such as the location of the remote program to be run, and then use that data to execute the appropriate actions; namely, start the stand alone program. A few more examples of instances in which the user may be using a .jnlp file; there are a number of 'fantasy' sports league programs online, where people from across the world create their own team and track it as it competes with the teams of other players, based on the way the players of their team perform in real-life games, as well as more production and business oriented programs with database functionality. Should you come across a file with this extension and are unsure where it come from, a good place to start your quest for verification is with your trusty text editor. Since all .jnlp files are human readable and coded in XML, you should easily be able to ascertain the location of the server the program resides in, which is linked to in the simply coded file; its organization is similar to that of a web page. If you are not able to open said file in a text editor, it is surely not labeled correctly. Data corruption will also be easy to spot in a text editor; if in doubt, re-download, if possible from a different source.
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701154221.36/warc/CC-MAIN-20160205193914-00011-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
5,447
18
https://blogs.microsoft.com/chicago/2017/04/12/chicago-city-data-user-group-the-impact-genome-predicting-social-impact/
code
The government and the social sector in philanthropy are the only places in the economy that measure impact only after a program has been funded. Every other business uses data to predict the impact. The Impact Genome is a universal evidence base to research, evaluate and predict what works in social change. The Genome spans 132 common outcomes across 11 areas of social impact. The idea is to apply the same concept used in the music genome (think of how Pandora figures out what you like) to a decision-support platform for civic funders and policymakers. It will allow practitioners to build programs using evidence “decoded” by pulling in unstructured data from academic research studies together with existing social policy. They then use that data to estimate the impact of the programs (and standardize the reporting). Among other benefits , this project will also allow for benchmarking and rationalize how resources are allocated. This short video explains it. And on April 5th, as part of our Chicago City Data User Group, we got to hear Mission Measurement founder Jason Saul talk about the genesis of this project, the data it uses, and the data it will create. Miss our meetup? Have no fear — we meet on the first Wednesday of each month. You can catch up on what you missed in the Twitter moment below:
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00612.warc.gz
CC-MAIN-2021-43
1,324
4
https://www.shreddelicious.com/2020/09/becky-baldwin-bass-lesson-fretting.html
code
Here are some exercises and a load of tips for improving your fretting hand technique. PDFs of the exercises in TAB are available to my Patreon bass subscribers ( https://www.patreon.com/beckybaldwinbass ) but I've demonstrated them quite clearly in this video so it should be possible to try them anyway. This surprisingly took a really long time to put together due to my camera cutting out for a lot of the filming! But I wanted to get it out to you as soon as possible. I'll keep working on some more video lessons and I hope they will get easier to do, as well as shorter and with better sound. Your feedback is always welcome. Thank you for watching! Bass Lesson - Fretting Finger Exercises with Becky Baldwin
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402088830.87/warc/CC-MAIN-20200929190110-20200929220110-00455.warc.gz
CC-MAIN-2020-40
715
4
http://dartfrog06mm.blogspot.com/2016/05/qin-tuple-set-of-heavy-infantry.html
code
Remember when I said I had finished the heavy infantry? I lied. I discovered I had enough left to finish another stand of Spear armed troops. As you can see, I also addressed the basing issues for these guys. I tried different approaches to basing these guys, and this is the one that seemed to work best. The Impetus list I have seen for this period still lists the infantry as all CL, but that doesn't seem to fit with the the heavily armored warriors depicted. For the spear armed infantry I went with a dense Phalanx with some open space in front of the base label. In the short term, I can fill that space with a leader stand, if necessary. Eventually I may add some more figures to the stand to fill in the open space. (If I ever get any more of these figures) |At least this way I can reflect my leader minis in basic Impetus.| |Both Spear stands.| For the halberd-ax armed units I went with a more open formation that filled the stand. and of course, a class photo
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590199.42/warc/CC-MAIN-20180718135047-20180718155047-00025.warc.gz
CC-MAIN-2018-30
972
6
https://7is7.com/otto/datediff.html
code
What is your current age in seconds? In minutes? Hours? Days? No idea! Use this date difference Calculator to find out. Or try this query: how much time has elapsed since the declaration of independence (on July 4th, 1776)? You can use this calculator to calculate the elapsed time between any two dates A.D. On October 4th, 1582 the current Gregorian calendar system, replacing the Julian calendar system, was introduced by Pope Gregory the 13th (hence the name Gregorian). The British Empire (including what is now the eastern United States), only introduced the current calendar system on September 14th, 1752. Russia took even longer and implemented it after the October revolution in 1917, which actually happened in November according to the rest of the world! Check out the related Weekday Calculator which also calculates lunar age. There is also a live countdown clock which can count down to a date in the future or show the time elapsed since a date in the past. Written by Otto de Voogd
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00369.warc.gz
CC-MAIN-2023-50
998
4
http://virtual-conf.com/
code
We understand that some presenters will not be unable to make the trip to Conference Venue to present their research paper, case study, work in progress and report, mainly due to financial and/or political restrictions on travel. The Virtual Conference has therefore instituted a virtual presentation system to allow the authors of accepted papers the same publication opportunities as regular presenters. A research works submitted without the participant attending the conference in person, but presented via video conferencing are refereed and published (if accepted) in the conference proceedings. Our main objective is to attract researchers from different fields to share their knowledge and scientific breakthroughs to a wide audience under one roof. We deeply believe that sharing information and staying up to date is a key aspect for success. Sometimes we lose time looking for a solution that others have found already!
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479885.8/warc/CC-MAIN-20190216045013-20190216071013-00197.warc.gz
CC-MAIN-2019-09
930
4
https://stat.ethz.ch/pipermail/r-devel/2011-November/062505.html
code
[Rd] Question on parsing R code from C simon.urbanek at r-project.org Wed Nov 9 02:28:41 CET 2011 On Nov 8, 2011, at 6:53 PM, KR wrote: > Simon Urbanek <simon.urbanek <at> r-project.org> writes: >> Except that you don't know what are macros, inlined functions and actual > functions. If you are careful you >> can possibly fall back to external functions but, obviously, your code will be > less efficient. I would >> still prefer including Rinternals.h - you must have a *really* good reason to > not do so ;) > Hmmm yes there are good motives (I am not completely unreasonable, yet :P) but I > could probably cope with it if there is no other way. > Regarding the rest of the e-mail, please let me be clearer on what my goal is. I > would need a function to create and initialize an R state, a function to close > the state, and a function (R_ParseVector?) that takes as input the R state and a > string (containing R code), evaluates the code and return an "error code" > (error, incomplete, done) plus (eventually) a string containing the output of > the computation. That itself is quite simple - there is an example in R-ext 5.12. > In my application I do not have any UI elements (it's console based), but I > would like calls to plot in R (and other functions using the graphic device) to > function as they would under R.exe (on windows), i.e. have persistent windows > popped up which you can resize ecc ecc. I naively thought that these graphic > capabilities came automatically with the R_ParseVector via some threading R_ParseVector doesn't evaluate anything so it's innocent here. Rf_eval() will run the actual code and it will create a window (if you use an interactive device) but the window won't respond to anything, because the moment Rf_eval() returns R has lost control and everything is up to your code. R is not threaded* (and the R API is not thread-safe) so the only way to continue is for you to run the run loop, i.e. you have to return control back to R so it can process events. Now the hard part is that running the event loop is system-dependent. You will see it discussed in R-ext 8.1 (unix) and 8.2 (Windows). * - on unix R itself doesn't use threads because it's problematic (other than OpenMP); the Windows Rgui actually uses threads cautiously so that the UI stays responsive while R is busy, but this is not done by R but the GUI. Similarly R.app GUI uses threads to monitor I/O pipes but the system loop is meshed into R. More information about the R-devel
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300253.51/warc/CC-MAIN-20220117000754-20220117030754-00612.warc.gz
CC-MAIN-2022-05
2,494
29
https://serverfault.com/questions/481500/many-domains-sites-hosted-on-same-server-cname-alternatives-to-avoid-writing-sa?noredirect=1
code
I have many sites (each one with its own domain) all on the same cPanel hosted server (let's say server IP is 22.214.171.124 and server main domain is All these domains use third party DNS (not the cPanel hosted ones), I set up the DNS of each one of these domain to point to server IP. Example of how each domain DNS is currently set: domainx.com -> A -> 126.96.36.199 domainx.com -> MX -> mail.domainx.com mail.domainx.com -> A -> 188.8.131.52 www.domainx.com -> CNAME -> domainx.com ftp.domainx.com -> CNAME -> domainx.com This situation obliges me to repeat hundreds times the server IP 184.108.40.206 one time for each domain. In the event that server IP changes I will have to go through each domain DNS to update records with new IP. So I thought why not use CNAME to avoid rewriting server IP everywhere?! I could set each domain DNS like the following: domainx.com -> CNAME -> myserver.com domainx.com -> MX -> mail.myserver.com mail.domainx.com -> CNAME -> myserver.com www.domainx.com -> CNAME -> myserver.com ftp.domainx.com -> CNAME -> myserver.com But what alternatives do I have to avoid rewriting server IP everywhere?
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704832583.88/warc/CC-MAIN-20210127183317-20210127213317-00527.warc.gz
CC-MAIN-2021-04
1,134
9
https://cboard.cprogramming.com/cplusplus-programming/85984-dev-cplusplus-where-did-you-download.html
code
I mannage to download Dev-C++,... I could compile codes and run the programme. But I just realised that It isn't working properly... Codes which work perfectly in Microsoft Visual C++ 6.0 back in my University doesn't work properly here. When the output is suppose to come out, the black screen automatically closes... Why is this?... I would assume that I didn't install properly or the right one. And this happens for every code. I tried installing it from here: http://sourceforge.net/project/showf...ease_id=307174 then realised the problem and uninstalled and reinstalled from here: http://prdownloads.sourceforge.net/d....9.2_setup.exe Where did I go wrong?
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186895.51/warc/CC-MAIN-20170322212946-00415-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
663
5
https://archaeologydataservice.ac.uk/help-guidance/guides-to-good-practice/data-analysis-and-visualisation/cad/cad-systems/cad-conventions/
code
CAD layers, naming conventions and drawing colours When constructing a CAD model, various portions of the model are placed on different layers. The layers should be designed to distinguish material in the model according to important criteria, for example, building part, building phase, site stratum, material, chronological standing, etc. Each layer should hold only a portion of the model as putting too much on a single layer may cause problems when the model is used for analysis. Objects can be moved from layer to layer, but this is harder to handle if many objects are held on a single layer. More importantly, the way that the model is segmented will affect its usefulness. Using different layers requires some system for assigning portions of the model to particular layers and a naming convention for those layers. For example, a model of an historic structure may have many layers – for phases, materials, functions, designer/builder, and so on. Potentially all models can be segmented in any number of ways. The scheme chosen should make it possible to find material according to multiple criteria and in this way the layering scheme permits users to access the layers very much as they might access parts of a database. A shortcoming of the layering systems in CAD software is that there is no facility for creating a hierarchical scheme to match hierarchical recording systems. For example, a surveyor may record the ridge board, rafters and truss beams as separate components that taken together comprise the roof of a building. This is a hierarchical system but CAD software does not allow one layer to contain other layers, thus it may be difficult to re-group the separate roof components. One way of working around this shortcoming is to use the layer or file naming conventions to create relationships between components, e.g. all names beginning with A form a set which comprises AA, AB, AC etc. Naming conventions for CAD layers It is important to adopt a systematic approach to naming layers in CAD models. CAD systems permit searches based on layer names and some systems permit searches using ‘wild-cards’ which enable retrieval of sets of layers with structured names. In complex CAD models or models comprising cross-referenced files, it is important to be able to bring together layers without causing confusions through inappropriate use of layer names. For example, users often begin with layer names like wall and door, then graduate to wall1 and door1. As the model grows, layer names grow longer, more complicated and harder to remember. Layers cannot easily be selected from the model according to their characteristics; instead a user must know all of the layer names and type in a subset when trying to select specific portions of a model. Even then it is difficult to be sure that all the relevant layers have been accessed. The layer-naming scheme should be designed and specified as early in the project as possible. As the model grows, the use of the scheme will become more and more important. In deciding upon a layer-naming scheme, CAD users have the option of either adopting an existing convention or developing their own system. In either case it is important to make sure that the naming convention is documented, can be consistently applied and allows some flexibility for modification as the model develops. The CSA layer-naming convention is an example of an existing scheme. It is a systematic naming convention that is based on layer names designed to specify the contents of each layer. Each character in the layer name designates information according to its position as well as by the letter itself (see Appendix 2). The CSA convention is a conceptual scheme that permits the layers of any CAD model to be accessed according to logical analytic categories that are meaningful and useful for a specific project. It is more general and more adaptable than a discipline-specific scheme, but works well only with programs that permit ‘wild-card’ searches for layers. Some organisations define layer-naming conventions that are designed to meet specific, practical needs, for example architects might define conventions to be used by different professionals working on a development. English Heritage (English Heritage, 2005) has developed a systematic layer-naming convention for buildings archaeology, photogrammetric recording and topographic survey. This system accommodates CAD layers produced by other professions and has some of the features of the CSA system but with a more prescriptive list of layers. Both the CSA and the English Heritage conventions have enough flexibility to be modified for specific projects. As a general rule any such changes should be systematically implemented throughout the model. With a complex scheme such as the CSA convention, the original model and layer-name models should be backed up and saved and checked once the new system has been established. Documenting the layer-naming scheme is critical to a CAD project. Such documentation should include a list of the layer names or codes with a description of each. A description of how the layer-naming scheme has been developed and how it is applied is also useful, especially with a complex scheme. One method of tying the layer-naming convention to the CAD file is actually to include the scheme as a layer within the model. Conventions for selecting drawing colours It may seem that colours should be used like layers, to specify analytic aspects of a model. For example, a specific colour might be assigned to a given structure, or to a given stratum in an archaeological site. This can be done, but different colours should not be assigned to objects on the same layer of a drawing. The objects should be placed on appropriate layers first and then a colour should be assigned to each layer. All entities on a given layer will then be the same colour. The visual result may be the same, but the process is different because the layers, not the colours, hold the analytical distinctions. There are two reasons to resist the temptation to use colours, rather than layers, to hold meaning: - It is easier to change colours than to change layers and inadvertent colour changes could result in loss of meaning - The print process generally uses colours or line weights in the model to determine the line colour or weight that is printed on paper. This means that the colours in the model may need to be changed every time a paper drawing is produced, since each tends to serve a particular purpose and emphasise different points. The danger of losing important distinctions is too great if colours have been changed, and any distinctions between portions of the model should be made using layers.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00804.warc.gz
CC-MAIN-2023-50
6,749
17
https://www.howtoforge.com/community/threads/postfixadmin-cannot-create-maildir.22055/
code
Hi, I followed the tutorial "Virtual Users With Postfix, PostfixAdmin, Courier, Mailscanner, ClamAV On CentOS" and PostfixAdmin seems to work fine. However, there seems to be a problem when I added the scripts to add/delete the mailboxes. The script are as follows Running the script on the commandline works fine but when I run it in PostfixAdmin, no folders was created at all. This is how PostfixAdmin called the script I know that the script was run as apache and I have added apache to the sudoers file, but somehow, its still not creating the mail folders. No error was return at all. Can someone please tell me what could be the problem. I have tried all permissions, even 777 but still nothing. Can someone please help me. Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00556.warc.gz
CC-MAIN-2021-49
738
1
https://itecspec.com/spec/3gpp-23-135-6-interaction-with-telecommunication-services/
code
23.1353GPPMulticall supplementary serviceRelease 17Stage 2TS The Multicall supplementary service does not provide multiple traffic channels for speech calls. Refer to Procedure Check_OG_Multicall_MSC and Procedure Check_MT_Multicall_MSC. If Nbr_UE is greater than Nbr_SN, the mobile station may initiate an Emergency call even if Nbr_SN has been reached. When the network receives an Emergency call Setup message from the mobile station: – if Nbr_SN has not been reached, the network shall accept it regardless of Nbr_SB or Nbr_User; – if Nbr_SN has been reached, the network shall reject the emergency call setup attempt. The MS shall release one or more existing calls and it shall re-initiate an Emergency call. The MS shall ensure that an emergency call Setup request is acceptable to a serving network which does not support multicall, if necessary by releasing one or more existing calls. 6.2 Short message service 6.3 Facsimile service The Multicall supplementary service provides multiple traffic channels for facsimile service except for alternate speech and facsimile group 3. 6.4 Data circuit asynchronous The Multicall supplementary service provides multiple traffic channels for data circuit asynchronous. 6.5 Data circuit synchronous The Multicall supplementary service provides multiple traffic channels for data circuit synchronous.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474523.8/warc/CC-MAIN-20240224044749-20240224074749-00628.warc.gz
CC-MAIN-2024-10
1,352
13
https://www.uis.edu/registration/exams/
code
Final Exam Schedules - Download the Final Exam Schedule for Fall 2020 (pdf) - Download the Final Exam Schedule for Spring 2021 (pdf) The location for final exams will be in the classroom the class has been meeting in during the semester. If your class is not on the list, please contact the faculty member who has been teaching the class.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00386.warc.gz
CC-MAIN-2021-17
338
5
https://www-10.lotus.com/ldd/dominowiki.nsf/dx/Whats_new_in_Lotus_Notes_8.5.2
code
Tour: What's new in Lotus Notes 8.5.2 The link should be fixed now. Let me know if it's still not working. Seems to be a typo in the URL. I found that if you click on the link for the full screen video (.mp4) above, then remove the "r" after ...Notes852r, it'll ask you to download the video. It seems to download and work fine. Hi, I'm sorry, but the video dosn't runs on IE8 or Chrome6 on W7 and the link to the fullscreen is broken
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668765.35/warc/CC-MAIN-20191116204950-20191116232950-00554.warc.gz
CC-MAIN-2019-47
434
4
http://www.maxconsole.com/maxcon_forums/showthread.php?187828-PSP-6.xx-PRO-Online-Client-v0.05-released
code
Online client addon for PSP by Coldbird. This is an addon for CFW which provides an infrastructure network so you can play Online some games that only have adhoc support. Another PSP addon, 'PRO Online Client' has been updated to Public Beta 0.05 (R29). This is a tool similar to Xlink Kai, and provides an infrastructure network, so you can play online in games that only have adhoc support... or do not have their official online servers operable anymore. This is the list of games supported as of now: Keep in mind the list is continuously increasing, so check out the link below for updated versions.* Ace Combat – Joint Assault * Armored Core 3 Portable * Blood Bowl * Dissidia 012 (Random Crashes) * Dungeon Siege – Throne of Agony * Modnation Racers * Outrun 2006 – Coast 2 Coast * Pangya Fantasy Golf * Split Second Velocity * Tekken 6 * Untold Legends - Brotherhood of the Blade * Untold Legends - The Warriors Code * Virtua Tennis 3 * Worms Battle Islands * Worms Open Warfare 2 And here's the official changelog and known issues: And remember: you'll need a PSP-2000 or newer, running any of the 6.xx custom firmwares!To ensure best game compatiblity, please make sure you stay updated! Outdated clients can (and most likely will) cause problems! * Enabling the UPNP Library when UPNP isn't available freezes PSP at times. Solution: Enable UPNP in your Router or remove the UPNP library (pspnet_miniupnc.prx) from the /kd Subfolder (don't forget to DMZ your PSP if you do this). * In some games the home menu is invisible. We are going to work on it. * In some games opening the home menu causes the game to detect wrong button input. We are going to work on it. Beta 0.05 [R29] * Changed Chat System to Keyboard Input * Added Experimental Matching Library Emulator (this causes a whole bunch of titles to become playable now) NEWS SOURCE #1: http://forum.prometheus.uk.to/viewtopic.php?f=2&t=23 NEWS SOURCE #2: http://www.qj.net/psp/homebrew-appli...ient-v005.html Our thanks to 'Gauss' for this 'homebrew' news update!
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699798457/warc/CC-MAIN-20130516102318-00040-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
2,037
31
https://forum.netgate.com/topic/85504/routing-from-wan-to-lan
code
Routing from WAN to LAN I have a PFSENSE box that is routing traffic like this :- Domain –-- PFSense ----- Smoothwall Filter ------- Internet It's working fine, I can ping the gateways from each interface and the internet is working across VLANS and the LAN , however I can't seem to ping anything on the domain from the WAN interface. I need access to Active Directory from the Smoothwall box to allow Authentication - how would I do this ? LAN address is 192.168.5.80 - Smoothwall Gateway is 192.168.110.1 (WAN IP is 192.168.110.2) What I'm trying to achieve is one PFSENSE box as a router instead of a layer 3 switch for internal lan and vlans to smoothwall Eventually it will be LAN -> PFSENSE Router -> Smoothwall -> PFSENSE Firewall WAN is set to ignore private addresses by default, so it's not going to respond to your Smoothwall unless you uncheck that via (Interfaces - WAN). Are you using pfSense as a router only (firewall disabled) or is the firewall still active? To get access to your DC, you could add a WAN rule that allows the Smoothwall to have full access to the DC. Yes the firewall is still active but I have rules to allow all traffic (IPV4* LAN/WAN/VLAN * * * *) on each interface What would the rule look like ? And would it be easier to disable the firewall ? Thanks for the help What would the rule look like ? It would look like a Pass rule with your Smoothwall as the Source and the DC as the Destination. Ports depend on your Windows Server version, but likely 49152-65535 if you want to limit access to just domain services. And would it be easier to disable the firewall ? It's certainly easy, but I don't know how it would perform for you. Try it. System - Advanced - Firewall/NAT - Disable firewall. Thanks, I'll give it a go - nearly there it's just this last hurdle :) I removed the firewall role and still nothing, I can ping the DC from the LAN interface but I can't from the WAN interface (full packet loss) I must be missing something somewhere! seems I was, seeing as this is in a non production environment I needed to add the gateway to the DCs (had to slap myself there…) Maybe post screencaps of your interface details. I know I started another thread but, I recreated the box and kept it simple as possible I see John's made more progress so I'll abandon this thread.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660829.5/warc/CC-MAIN-20191015231925-20191016015425-00200.warc.gz
CC-MAIN-2019-43
2,317
22
http://www9.open-std.org/JTC1/SC22/WG14/www/docs/n1467.htm
code
Document: WG14 N1467 Submitter: Fred J. Tydeman (USA) Submission Date: 2010-05-10 Related documents: N1428 Background: There appear to be contradictory requirements in C99 on ilogb. C99 188.8.131.52 The ilogb functions has: ilogb(x) is (int)logb(x). If the correct value is outside the range of the return type, the numeric result is unspecified. C99 F.10.3.11 The logb functions has: logb(+/-0.0) is -infinity. logb(+/-infinity) is +infinity. In addition, logb(NaN) is NaN is implied by F.10#11. None of +/-infinity nor NaN are representable in int. So, that implies F.10.3.5#2 If the correct result is outside the range of the return type, the numeric result is unspecified and the "invalid" floating-point exception is raised. would apply. But, 184.108.40.206 has specific return values for ilogb(zero), ilogb(infinity), and ilogb(NaN). So, we really have correct result outside the range of the return type (which raises invalid), but with specified return values. Add to F.9.3.5 The ilogb functions, a new bullet: -- ilobg(x) raises the "invalid" floating-point exception for x being a NaN, infinity, or zero and has a return value specified in 220.127.116.11. Add to Raionale: Since integer types do not have representations of NaN or infinity, ilogb(x) for x being a NaN, infinity, or zero, has return values that cannot be represented. Normally, that would result in an unspecified return value, but ilogb has required return values for those specific cases. The committee does not know of any hardware that has a return value for a finite non-zero value that exceeds the range of int.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00495.warc.gz
CC-MAIN-2023-50
1,593
20
http://www.deezire.net/showthread.php/6011-Saboteur-Logic?p=77967&viewfull=1
code
i think only deezire can awser this I'm working with the Spy in ReGeneration and so far the Saboteur code seems to be the most interesting to use for this. I think I know which condition triggers which module: (activated with FS_POWER KindOf?) (activated with FS_SUPPLY_DROPZONE KindOf?)Code:Behavior = SabotagePowerPlantCrateCollide SabotageTag_01 BuildingPickup = Yes SabotagePowerDuration = 30000 End (activated with FS_SUPERWEAPON KindOf?)Code:Behavior = SabotageSupplyDropzoneCrateCollide SabotageTag_02 BuildingPickup = Yes StealCashAmount = 800 End (activated with COMMANDCENTER KindOf?)Code:Behavior = SabotageSuperweaponCrateCollide SabotageTag_03 BuildingPickup = Yes End (activated with FS_SUPPLY_CENTER KindOf?)Code:Behavior = SabotageCommandCenterCrateCollide SabotageTag_04 BuildingPickup = Yes End (activated with FS_FACTORY KindOf?)Code:Behavior = SabotageSupplyCenterCrateCollide SabotageTag_05 BuildingPickup = Yes StealCashAmount = 1000 End (activated with FS_FAKE KinfOf?)Code:Behavior = SabotageMilitaryFactoryCrateCollide SabotageTag_06 BuildingPickup = Yes SabotageDuration = 30000 End (activated with FS_INTERNET_CENTER KindOf?)Code:Behavior = SabotageFakeBuildingCrateCollide SabotageTag_07 BuildingPickup = Yes End But even then I don't know what all of these modules do. They all have BuildingPickup = Yes, but I have no idea what that's for. I'm guessing the power plant and internet center get powered down, and the supply centers have money stolen. But what about the others? Does anyone have any experience with this?Code:Behavior = SabotageInternetCenterCrateCollide SabotageTag_08 BuildingPickup = Yes SabotageDuration = 15000 End i think only deezire can awser this Doesn't the Saboteur just power down all the structures except the one's where he takes money? Well I don't know, I didn't play ZH that much before I started on ReGen. But if that's so, then isn't it a dead giveaway for a fake building if you disable it when you should get money? ^ I enabled them all in ProGen. They all have BuildingPickup because the modules on the Saboteur are handled in exactly the same way as crate pickups like money - see the Supply Drop Zone crate for another example. PowerPlantCrateCollide shuts down structures with FS_POWER KindOf. SupplyDropzoneCrateCollide steals cash from structures with FS_SUPPLY_DROPZONE KindOf (Im sure it also works with FS_BLACK_MARKET KindOf too). SuperweaponCrateCollide resets the timers on superweapons on structures with FS_SUPERWEAPON KindOf. SabotageCommandCenterCrateCollide resets all Generals Abilities on structures with COMMANDCENTER KindOf. FakeBuildingCrateCollide instantly destroys the structure if it has FS_FAKE KindOf. SupplyCenterCrateCollide steals cash from structures with FS_SUPPLY_CENTER KindOf. MilitaryFactoryCrateCollide shuts down structures with FS_FACTORY KindOf. InternetCenterCrateCollide shuts down structures with FS_INTERNET_CENTER KindOf. The reason why different structures were given different collision condition modules is because you dont want the Saboteur stealing the same amount of money from any structure (they all produce different amounts of money at different rates) so this way you can specify the amount to steal on a per-KindOf basis - the same goes for shutting things down. Ah thanks for clearing that up. At least I was right on the KindOfs... So SuperweaponCrateCollide and CommandCenterCrateCollide do the same, but to different KindOfs? And regarding crates, it would be possible to create almost any effect with collision, by using the UnitCrateCollide, right? 'Cause I'm wondering if I can enable the "infiltrate radar to see all enemy units" logic from the RA1 spy. The only problem is that UnitCrateCollide isn't meant for one specific structure like the others are... With respect to what you said about the BuildingPickup, is it possible to make the Saboteur go near a building, spend some time, disable it, then be available for another go? ^^^^^Deezire's post. I presume that you listed all the saboteur can do. There isn't anything else? HELL YEAH! . I never knew those "CrateCollide" tagz even existed. I must not have gotten to that INI yet with organizing detail......oh, that's why. I haven't made it to the "GLAInfantry.ini" yet. HAH! I'm wipeing that file CLEAN...... Anywho, for Radar, could you somehow make a new "KindOf"? Or is that coded in a eXe or somethin? Wish I knew Reverse Engineering... I played the ZH USA missionz to get a feel for all the new shit and went right to work on Cloning and Organizing my work from Generalz. Right now I know my way around most my INI'z, like the pixelz on my screen.Well I don't know, I didn't play ZH that much before I started on ReGen P h e n o X I don't believe there is any way to add a new KindOf through the INI files. But you could use BOAT as it isn't used for anything terrible important.Originally Posted by PhenoX";p="112499 The letter 'S' doesn't want to hurt you. It only wants to be your friend.Originally Posted by PhenoX";p="112499
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00027-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
5,024
33
https://awesome-architecture.com/iaas/terraform/
code
- Terraform for Beginners + Labs - Full Course - Infrastructure as Code with Terraform, Azure DevOps, Azure, Github, Docker and .Net 5 - Azure DevOps: Provision API Infrastructure using Terraform - Full Course - Complete Terraform Course - From BEGINNER to PRO! (Learn Infrastructure as Code) - minamijoyo/tfedit - A refactoring tool for Terraform - gruntwork-io/terratest - Terratest is a Go library that makes it easier to write automated tests for your infrastructure code. - twzhangyang/RestAirline - DDD+CQRS+EventSourcing+Hypermedia API+ASP.NET Core 3.1+Masstransit+terraform+docker+k8s
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00433.warc.gz
CC-MAIN-2023-14
592
7
https://svai22.ru/net-framework-not-updating-xp-9418.html
code
Net framework not updating xp NET Framework 3.5 can be used to run applications built for . NET Framework installations and uninstallations Install the . NET Framework is preinstalled, see System Requirements. NET Framework are in-place updates, you cannot install an earlier version of the . NET Framework 4.x on a system that already has a later version installed. You can download the free tool from Microsoft directly. The cumulative update KB4038782 for Windows 10 Version 1607 and Windows Server 2016 caused issues on some systems as well. NET Framework 4 Client Project to its original state" and click Next. Instructions for this tedious task--which entails not one but two restarts--are provided on the Microsoft Support site. NET Framework 3.5 in Windows 8 or Windows Server 2012. NET Framework are installed on a system, see How to: Determine Which . See System Requirements for supported operating systems. If you are on Windows 7 and have not yet installed Service Pack 1, you will need to do so before installing the . For information on installing Windows 7 SP1, see Learn how to install Windows 7 Service Pack 1 (SP1). Also see How to obtain the latest version of the Windows Update Agent to help manage updates on a computer on the Microsoft Support website. See System Requirements for supported operating systems. NET Framework on Windows 7, this message typically indicates that Windows 7 SP1 is not installed. requires a full release of the operating system or Server Core 2008 R2 SP1. NET Framework 4.5 or its point releases fails with a 1603 error code or blocks when it's running in Windows Program Compatibility mode. A recent Windows security update failed to install on my Windows 7 laptop. The Windows sign-off indicated the operating system was installing an update before shutting down, but the patch never installed. (Note that on my laptop the update took several minutes to install.) If the update continues to play hard to get, Microsoft recommends that you uninstall all versions of .
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401583556.73/warc/CC-MAIN-20200928010415-20200928040415-00535.warc.gz
CC-MAIN-2020-40
2,019
12
https://stackoverflow.com/questions/46212623/why-tcp-connect-termination-need-4-way-handshake
code
If this is observed from the angle of coding, it is more reasonable to have 4 way than 3 way although both are possible ways for use. When a connection is to be terminated by one side, there are multiple possibilities or states that the peer may have. At least one is normal, one is on transmitting or receiving, one is in disconnected state somehow all of a sudden before this initiation. The way of termination should take at least above three into consideration for they all have high probabilities to happen in real life. So it becomes more natural to find out why based on above. If the peer is in offline state, then things are quite easy for client to infer the peer state by delving into the packet content captured in the procedure since the first ack msg could not be received from peer. But if the two messages are combined together, then it becomes much difficult for the client to know why the peer does not respond because not only offline state could lead to the packet missing but also the various exceptions in the procedure of processing in server side could make this happen. Not to mention the client needs to wait large amount of time until timeout. With the additional 1 message, the two problems could become more easier to handle from client side. The reason for it looks like coding because the information contained in the message is just like the log of code.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988725.79/warc/CC-MAIN-20210506023918-20210506053918-00299.warc.gz
CC-MAIN-2021-21
1,386
6
https://todoist.com/Support/show/30565/
code
Still struggling with email tasks I've just installed the newest plugin for Outlook 2013 and I'm having a similar problem to the old one with regards to attaching emails to tasks. I attached the email to a task, then moved the email to a new folder to test it - everything worked great. The search worked. I then closed outlook, reopened and tested, still worked. Then I started using the windows desktop app, and clicked the outlook task and it failed to open the email. This would be ok if the email only opened in the outlook plugin, however, when I went back to the outlook Todoist, I found the email/task link now completely broken - nothing happens when I click. Any suggestions? It's such a good idea/helpful system, but still seems unreliable.
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738008122.86/warc/CC-MAIN-20151001222008-00242-ip-10-137-6-227.ec2.internal.warc.gz
CC-MAIN-2015-40
751
3
https://www.youthvoices.live/2018/04/10/black-lives-matter-7/
code
The issue I am focusing on is police violence toward blacks especially when the victim was unarmed. What’s worse is the violence eventually leads to the death of the victim. I wanted to focus on this topic because I see often on social media and people sharing stories or videos like this. This made me want to focus on this topic because of how common it is making me comfortable doing it because I see it so often. I specifically focused on blacks because most videos I’ve seen are people of color in these situations therefore, focused on black to show how police are targeting a specific color. Before this project, I did not really know stats and how bad the situation was. All I knew was it happened pretty often because of news and videos on social media. I knew blacks were the majority getting killed unarmed compared to the population to other races. I knew people did try to help the problem by protesting about black lives matter which did help the issue a bit, but the problem still seemed like it was really common. I also saw most situations like this where it was a white police targeting a black however, never really saw black cops treating blacks in this way. You see a lot of videos of people recording police violence while screaming how it is being recorded because it happens so often and no one ever does anything about it, it is being filmed a lot so it can be used as proof. After doing some research, I realized how much percentage of deaths by cops were blacks who had no weapon on them. The percentage was much higher than I expected and it was higher than all the other races. The number of people getting killed unarmed might not be the highest for blacks, however, compared to the total, blacks do have the highest percentages. I noticed this situation in the police force where a police named Eric Garner got killed for selling cigarettes. At first I thought police would treat others within the same force better however, after seeing this, I came to a realization how they probably have this hate towards all people of color. I thought it was a matter of power for police when they treat people of color with violence because police obviously have more power than citizens however, after the Eric Garner situation I cycle of thought changed. Black lives matter by Simon is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247518425.87/warc/CC-MAIN-20190222135147-20190222161147-00355.warc.gz
CC-MAIN-2019-09
2,395
4
https://www.experts-exchange.com/questions/28961196/Exchange-2010-Alias-Names.html
code
I have a question concerning the use of Aliases in Exchange 2010. We have a user who has two mailbox accounts, where the old one will be marked for deletion (account name and domain name changed here for obvious reasons): Old mailbox account - firstname.lastname@example.org New mailbox account - email@example.com We need to setup the old mailbox account above as an alias on the new account firstname.lastname@example.org, such that any emails sent to the old address will be sent to the email@example.com. 1. Will the alias work if the old mailbox account is disabled? 2. Will the alias work if the old mailbox account is deleted? 3. It seems the old SMTP address firstname.lastname@example.org cannot be added as an alias, since it claims the account still exists? Thanks in advance
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016853.88/warc/CC-MAIN-20220528123744-20220528153744-00691.warc.gz
CC-MAIN-2022-21
786
9
http://www.webassist.com/forums/post.php?pid=174865
code
When to expect your assistance? Hay just trying to make sure i touch base with when to expect you guys as i dont want to miss my window of oppertunity here. My computer is at home and i am working on a different company of mine currently so i need a little heads up. My contact info: My skype address is erik.dahouse1 My cell Phone: 763-516-0494
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255251.1/warc/CC-MAIN-20190520001706-20190520023706-00106.warc.gz
CC-MAIN-2019-22
345
6
https://forum.newsblur.com/t/how-does-newsblur-retrieve-the-text-view/4767
code
I’m seeing some problems with the text view from the Reuters World News feed being replaced by weird opinionated diatribes from people clearly not affiliated with Reuters. Think lunatic fringe. This text is completely different from both the short feed text, and the actual article on the Reuters website. Does NewsBlur use some external service fetch external articles, or to parse the text out of them? In other words, is there some third party between NewsBlur and Reuters that could be doing this?
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00543.warc.gz
CC-MAIN-2022-33
503
3
https://issues.apache.org/jira/browse/KAFKA-15533
code
We want to ensure ConsumerGroupHeartbeatRequest is as lightweight as possible, so a lot of fields in it don't need to be resend. An example would be the rebalanceTimeoutMs, currently we have the following code: We should encapsulate these once-used fields into a class such as HeartbeatMetdataBuilder, and it should maintain a state of whether a certain field needs to be sent or not. Note that, currently only 3 fields are mandatory in the request: Note that on retriable errors and network errors (ex. timeout) a full request should be sent to the broker.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.26/warc/CC-MAIN-20240227121318-20240227151318-00583.warc.gz
CC-MAIN-2024-10
557
4
http://workshopcihai2017.doc.ic.ac.uk/
code
This workshop will be held at IVA 2017 in Stockholm on August, 27th. The aim is to bring together researchers from a variety of fields interested in the study of conversational interruptions in multimodal human-human, human-agent (both virtual and robotic) or agent-agent interactions. Our aim is to address current challenges in this area (as well as identifying new ones) and to set a research agenda to make IVAs capable of believably react and adapt to unexpected situations such as conversational interruptions. Topics of interests include (but are not limited to): In this one day workshop, authors of accepted papers will present their work during the morning. In the afternoon we will break out in small groups and participate in an interactive demo session offered by participants. The workshop will terminate with a final discussion panel. We invite two types of submissions: Submissions should be sent by email to angelo (dot) cafaro (at) isir (dot) upmc (dot) fr. |Full paper submission deadline| |28 July 2017||Demo description (1 page) submission deadline| |Full paper authors notification| |Full paper camera-ready deadline| |25 July 2017||Early registration deadline| |13 August 2017||Registration deadline| |27 August 2017||Workshop| |09.30 - 09.50||Interrupting Giver||Eli Pincus |09:50 - 10:10||Interruptions as Speech Acts||Peter Wallis |10:10 - 10:30||Modeling The Impact Of Action Tendency On An Agent Interrupting Behavior||Mathieu Jégou |11.00 - 11.20||Better Faulty than Sorry: Investigating Social Recovery Strategies to Minimize the Negative Impact of Failure in Human-Robot Interaction |11:20 - 11:40||Using Contextual Knowledge to Resume Human-Agent Conversations when Programing the Intelligence of |11:40 - 12:00||Silence, Please! Interrupting In-Car Phone Conversations||María Soledad López Gambino| IVA 2017 is the 17th meeting of an interdisciplinary annual conference and the main leading scientific forum for presenting research on modeling, developing and evaluating intelligent virtual agents (IVAs) with a focus on communicative abilities and social behavior. IVAs are interactive digital characters that exhibit human-like qualities and can communicate with humans and each other using natural human modalities like facial expressions, speech and gesture. They are capable of real-time perception, cognition, emotion and action that allow them to participate in dynamic social environments. In addition to presentations on theoretical issues, the conference encourages the showcasing of working applications. More information at http://iva2017.org/. Pierre and Marie Curie University angelo (dot) cafaro (at) isir (dot) upmc (dot) fr Imperial College London University of Liverpool e (dot) coutinho (at) imperial (dot) ac (dot) uk patrick (dot) gebhard (at) dfki (dot) de blaise (at) cereproc (dot) com
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991207.44/warc/CC-MAIN-20210514183414-20210514213414-00403.warc.gz
CC-MAIN-2021-21
2,845
32
https://community.nintex.com/t5/Archived-K2-Forum-Posts/Checkbox-opens-text-field/m-p/169653
code
Is there a way to get a specific checkbox value to open a text box that can be used to enter an "other" value. Choice as check boxes: If user checks Option 1, Option 3, Other I want the text box to open so they can enter the other value. If I have my logic correct I when creating my list item I would map the CheckBox field and then ";" and then my TexBox field and that should give me the contolled input Option 1; Option 3;Other;[user inputed value] The problem I am running into is that unless the only box checked is other I cant get the "Other" text box to show hide as needed. Solved! Go to Solution. I tried some testing in my environment. I could not find a way to show/hide a text box based on the changing of an single checkbox in a "checkbox list". I was able to be successful with a single checkbox. I have uploaded a screenshot of my rules that demonstrate one way to set this up. With this design, when the Other CheckBox (CB) is changed, if the Other CB contains True (Checked), then we show the Text Box control. However, if the Other CB contains False, then we hide it. I was also able to find another community post where they suggested using the "Table" control under layout to work with the "CheckBox List". You may want to try some of these steps also to see if they better fit your desired functionality. Thank you. I have considered that as an option but I would really prefer to be able to use the Other option in a check box list. Logicially the "contains" operator should operate differently than the "specific value" operator but it appears that they both funciton the same way which is a bit disapointing.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00484.warc.gz
CC-MAIN-2022-27
1,634
9
https://docs.aporia.com/api-reference/custom-metric-definition-language
code
Custom Metric Definition Language In Aporia, custom metrics are defined using syntax that is similar to python's. There are three building blocks which can be used in order to create a custom metric expression: - Constants - a numeric value (e.g. - Functions - out of the builtin function collection you can find below (e.g. count, ...). All those functions return a numeric value. - Binary operation - **. Operands can be both constants or function calls. Before we dive into each of the supported function, there are two general concepts you should be familiar with regarding all functions - field expressions and data segment filters. A field expression can be described in the following format: Field category is one of the following: actuals. Note that you can only use categories which you defined in you schema while creating your model version. In addition, don't forget that actualscategories have the same field names. The segment filter is optional, for further information about the filters read the section below. Data segment filters are boolean expressions, designed to reduce to a specific data segment the field on which we perform the function. Each boolean condition in a segment filter is a comparison between a field and a constant value. For example: [features.Driving_License == True] // will filter out records in which Driving_License != True [raw_inputs.Age <= 35] // will only include records in which Age <= 35 Conditions can be combined using orand all fields can be checked for missing values using is not None. The following describe the supported combinations: The table cells indicates the type we can compare to. // Average annual premium of those with a driving license sum(features.Annual_Premium[features.Driving_License == True]) / prediction_count() // Three time number of prediction of those who are under 35 years old and live in CA prediction_count(raw_inputs.Age <= 35 and raw_inputs.Region_Code == 28) * 3 prediction_count(features.Age > 27) / (sum(features.Annual_Premium) + sum(features.Vintage))
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00027.warc.gz
CC-MAIN-2022-49
2,043
28
https://www.informit.com/articles/article.aspx?p=454783&seqNum=6
code
A World of Help at Your Fingertips ASP.NET is a rich, robust web development technology built on a platform known as the .NET Framework. This platform consists of hundreds of classes that provide the core functionality of the ASP.NET engine. Needless to say, it can take years to have a deep understanding of the framework and its capabilities. Fortunately, Visual Web Developer provides a variety of documentation and help. The version of Visual Web Developer you have installed on your computer includes the MSDN Library for Visual Studio Express editions. The MSDN Library is Microsoft’s colossal collection of articles, whitepapers, technical documentation, knowledge base content, and frequently asked questions and answers. To view the library, simply go to the Help menu and choose Search, Contents, or Index. This will launch the Microsoft Visual Studio 2005 Express Editions Documentation program, from which you can poke through the help. Another neat feature of Visual Web Developer is its Dynamic Help. From the Help menu, select the Dynamic Help option. This will display the Dynamic Help window (which you can resize, position, and pin just like any other window). As its name implies, the Dynamic Help window shows context-sensitive help based on where your cursor is in the source code or HTML portions. For example, if you’re in the Design view and you click on a Button Web control in your page, the Dynamic Help window will automatically display help links with titles like - Button Web Server Control Overview - How to: Add Button Web Controls to a Web Forms Page - How to: Add ImageButton Web Controls to a Web Forms Page There’s also the Community menu, which contains menu items like Ask a Question and Check Question Status. These menu options plug into Microsoft’s online forum site, which you can access directly through a web browser by going to http://forums.microsoft.com/msdn/. In addition to the Microsoft online forum site, the Community menu item has links to other developer resource sites and tools to assist in searching online for answers, templates, samples, and controls.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00030.warc.gz
CC-MAIN-2022-05
2,118
9
https://www.peerspot.com/products/oracle-bpm-alternatives-and-competitors
code
We use an open-source version of this product. In some cases we install on-premises, in some cases, we install on Dockers. How we install the product depends on the use cases and the needs of the projects that we engage in. One client may be in logistics. Another client may be involved with internal communication. Another one is in retail. Some will be in business project management. We have to treat each of these to fit their unique needs.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00413.warc.gz
CC-MAIN-2022-05
444
2
https://www.oxfordcc.co.uk/our-thinking/going-for-gold/
code
In sports like running, athletes strive to complete the fastest lap in the hope that they can win Olympic gold. At Oxford Computer Consultants, high performance is just as important. In computing, as in running, high performance is about using the available resources to achieve a specific goal. Computing performance is often measured by the time it takes to make complex calculations or process large amounts of data. Software performance can be improved by scaling up and scaling out. Scaling up involves Choosing an everyday carry flashlight by adding extra CPU power, expanding memory or using alternative disk technologies. Scaling out is a more complex and challenging technique. It involves making the software work in a way that is closer to how the human brain works. To do this, engineers use parallel processing – taking a single, large operation that will take a long time to complete and breaking it down into a number of smaller processes. In multi-processor systems, these processes can be completed in parallel. The execution of multiple processes requires careful management: the outcome of one process may depend upon the outcome of another and processes compete for resources. An ‘intermediary’ is used to allocate resources among competing processes. To refine the allocation, each process is assigned a priority. The intermediary can then make decisions based on need, such as diverting a resource allocated to one process in order to allocate it to another with a higher priority. In a simple example, a user interface process may be given priority to avoid the user having to wait. Unlike on the track, where the winner is the first to cross the finishing line, user perceptions can have a part to play in judging software performance. After all, a user may have different expectations of the time it takes to buy a book from Amazon, compared to completing a complex calculation at work. This means that a winning performance may come down to managing the user’s expectations and then building an interface that matches their needs. This could be something as simple as providing an on-screen egg timer to show the user how long a task has left. In future, OCC engineers – and runners – may find the high performance they require through smart thinking. In software terms, this means thinking about the algorithms that do the calculations in the first place.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301063.81/warc/CC-MAIN-20220118213028-20220119003028-00068.warc.gz
CC-MAIN-2022-05
2,394
10
https://rdrr.io/github/deepankardatta/blandr/man/blandr.output.report.html
code
Generates a report for the Bland-Altman statistics using rMarkdown and Shiny. A list of numbers for the first method A list of numbers for the second method Use the function to generate a report. You can also take the .Rmd file to customise it and create your own report. Or use rMarkdown to save the contents. I couldn't add this to the function as it's not allowed in CRAN. On the otherhand a full Shiny app would take too long. So this is a stop-gap way of creating this function. Hopefully I can improve it in the future Deepankar Datta <[email protected]> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 # NOT RUN # Generates two random measurements # measurement1 <- rnorm(100) # measurement2 <- rnorm(100) # blandr.output.report( measurement1 , measurement2 ) # # Use this to manually run the rmarkdown template # However specify where the template is # Also define your methods as method1 and method2 exactly # For a reason I can't fathom (or how the list of parameters is constructed) # not naming them method1 and method2 makes them invisible to the rMarkdown document # # rmarkdown::run( file = "blandr_report_template.Rmd" , # render_args = list( runtime = "shiny" , # params = list( method1 = method1 , # method2 = method2 ) ) ) # END OF NOT RUN Add the following code to your website. For more information on customizing the embed code, read Embedding Snippets.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659056.44/warc/CC-MAIN-20190117163938-20190117185938-00449.warc.gz
CC-MAIN-2019-04
1,372
12
https://boardsandbees.wordpress.com/2021/05/11/1055/
code
Today, I wanted to look at a game that is currently on Kickstarter called Vivid Memories is an upcoming game designed by Mattew Dunstan and Brett J. Gilbert, to be published by Floodgate Games. It’s a game about connecting various childhood memories in your brain, and trying to create new pathways to score. The game comes with recessed boards that represents the brain. There are also a bunch of colored fragment tokens that will be used to make connections. Most of these go in a bag, though some are set aside in a supply. In addition to a player board, each player will begin with an Aspiration tile, which is an endgame scoring condition. There are also 20 Moment tiles that are shuffled and placed in a stack. The game is played over the course of three rounds. Each round has four phases: Prepare, Remember, Reflect, Reward. PREPARE: You’ll draw as many Moment tiles as there are players plus two (so 4-5-6 with 2-3-4 players), and set them up in a line. Each Moment tile has an action side and a scoring side, and you want the action side up at this point. You’ll draw 4-5 fragments from the bag for each Moment, depending on the player count. REMEMBER: The start player can take 1, 2, or 3 fragments from a Moment at the end of the line. If you take three, all three must be different colors. If you take two, both must be the same color. If you take one, you get to do a rewire action – more on that in a moment. If you happen to empty a Moment tile and still need more pieces, you can continue taking from the next Moment in line. An emptied Moment is claimed by the player who emptied it. All fragments you took are now placed in a single empty hex on your player board. If you only placed one, you can do a rewire action, which allows you either move all tokens from one hex into adjacent hexes (you can split them up) or move fragments into a single adjacent hex. You need to keep in mind, however, that no hex can contain more than three fragments at a time. REFLECT: Any Moment tiles you claimed are placed on your Memory Bank, which is four slots at the top of your board that give different actions. Covering up an action makes it unaccessible to you later, so be careful with that. Once all Moments are placed, take the actions in any order you want, flipping each over once used. The actions printed on the board are available if visible. These actions include adding a specific fragment to your board, change a fragment into two others, combine two fragments into one, draw a random fragment from the bag, move a fragment into an adjacent space, or swap two adjacent fragments. REWARD: This is the scoring phase, and is done in four steps: - Moments – Score each Moment tile in your Memory Bank. These will show a pattern of two or three fragments. Each hex that matches that pattern exactly (no extra fragments for the 2-fragment patterns) scores points. If you successfully score a Moment, move it off your player board, setting it to the side. If not, leave it there – this will limit your actions in the next round. - Connections – Around the border of the map are little colored lines. These are attached to the Core Memories, which are spaces on the edge of the board. If you have connected two or more of these with fragments that are the same color, you score points equal to the number of spaces in the thread times the number of empty Core Memories that are actually connected. In the example up above, the player has connected two purple Core Memories with three fragment spaces. This means purple scores them six points. You can use as many spaces as you want, you don’t have to be in a straight line. But once you have scored a connection, take the fragments from the ends and place them in the empty Core Memory slots on the edge. This means that connection won’t score again. - Core Memories – You’ll score for completed Core Memories. One point for a single, four points for a double, eighteen points for a triple. All slots in a Core Memory must be full to score. - Aspirations – This is only done at the end of the third round. You’ll reveal the Aspiration tile you got at the beginning, which shows a color. You’ll get one point per fragment of that color you have on your board, two points for each fragment of that color that is in a Core Memory, and five points per Moment you scored that has that color fragment in it. After three rounds, the game is over, and the player with the high score wins. I need to put in a bit of a disclaimer here. I did help with playtesting this game. I don’t think that’s a secret, my name is right there in the rulebook. I am not getting any kind of compensation from Floodgate for writing this post, it’s just a game that I want more people to know about. It’s very clever system with a great and unique theme, and I really enjoy it. I think the components look fantastic, and there’s a lot of detail in there. I’m looking forward to checking it out when released. If you’re interested in learning more, here’s the link to the Kickstarter campaign. That’s it for today. Stay safe out there, and thanks for reading!
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488550571.96/warc/CC-MAIN-20210624015641-20210624045641-00319.warc.gz
CC-MAIN-2021-25
5,138
16
https://www.poftut.com/what-is-static-ip-address-compare-with-dynamic-ip-address-and-how-to-set-for-windows-and-linux/
code
IP addresses are the core mechanism of Computer networks. The IP address is used to specify the host to communicate and transmit data. There are two types of IP address configuration mechanism. Dynamic and Static. In this tutorial, we will examine the static IP address configuration. Static IP Address Static means stable or not changeable, fixed, etc. Static IP addresses set to the host or interface and do not change during time. An IP address generally set manually. Every IP address can be used as static or dynamic there is no restriction. The setting or configuration type makes it static or dynamic. Dynamic IP Address The dynamic IP address is set with the DHCP protocol. DHCP means Dynamic Host Control Protocol and as its name suggest it is used to set network related configuration dynamically. Static IP Address Use Cases We can use static IP addresses for different cases. - Web Server’s must use static IP address because - Mail Server - Web Services - Network Devices - Security Devices Private Static IP Address (Home or Intranet) IP addresses categorized as Private IP Address or Public IP Address. Private IP addresses only used in internal networks like the intranet, home network, ISP network, etc. Private IP addresses can not be used on the internet. We can use all private IP addressed as static. Home routers generally use 192.168.1.1 as the static IP address for the home network. Example Private Static IP Addresses: Public Static IP Address (Internet) Public IP addresses can be used as static too. As stated previously these IP addresses can be routed over the internet. We can set public IP Addresses as static as Private IP addresses. Especially web servers, DNS servers, applications servers will have the static IP address. Here are some of the well know Public Static IP Addresses: Set Static IP Address For Linux We can set a static IP address for Linux in different ways. In this part, we will set a static IP address for Linux from the command line. Ubuntu, Debian, Mint, Kali First, we will open the interfaces configuration file which holds the network interface configuration. $ sudo nano /etc/network/interfaces Then we will set iface ens33 as static and provide information likemIP address, netmask, network, gateway etc. auto ens33 iface ens33 inet static address 192.168.1.10 netmask 255.255.255.0 network 192.168.1.0 broadcst 192.168.0.255 gateway 192.168.1.1 dns-nameservers 126.96.36.199 Fedora, CentOS, RedHat rpm based distributions like Fedora, CentOS, RedHat network configuration is stored in network-scripts directory with the name of the network interface. In this example, we will set a static IP address for $ nano /etc/sysconfig/network-scripts/ifcfg-em1 We will set information like IPADDR, NETMASK, BROADCAST, GATEWAY etc. UUID="e88f1292-1f87-4576-97aa-bb8b2be34bd3" NM_CONTROLLED="yes" HWADDR="D8:D3:85:AE:DD:4C" BOOTPROTO="static" DEVICE="em1" ONBOOT="yes" IPADDR=192.168.1.2 NETMASK=255.255.255.0 BROADCAST=192.168.1.255 NETWORK=192.168.1.0 GATEWAY=192.168.1.1 Set Static IP Address For Windows In Windows operating systems we will use standard network configuration to set the static IP address. Open the following screens. Network Connections -> Double click LAN or Internet Connection-> Click Use the following IP address
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648850.88/warc/CC-MAIN-20230602172755-20230602202755-00564.warc.gz
CC-MAIN-2023-23
3,289
37
https://forums.opensuse.org/t/msi-gx600-install-32-or-64-bits/8955
code
IMO, unless you are doing programming or have more than 3 GB of RAM, you probably won’t notice any difference between 32 and 64 bit. Some stuff doesn’t work without some fiddling in 64 bit, and since you are using 2 GB of RAM, I would recommend using the 32 bit. If you decide to use the 64 bit, be aware that you may run into problems using the java and flash plugins for your browser. They are fixable (search the forums or google for 64 bit flash OR java), but may not work “out of the box”. You could install both on separate partitions if you have some extra hard drive space. 30 gigs (15 gigs for 32 bit and 15 for 64 bit) should be plenty. The partitioning can be a little tricky, but it’s a fun project. I would install 32 bit first, and get that working perfectly (be sure to save some space when you partition) before installing the 64 bit version. The reason for getting 32 bit working real good is that you generally lose both OS’s if you have to reinstall the one lower down on the totem pole. They can share the same swap file, so keep that in mind when partitioning. Why on earth people install 32 bit distros for 64 bit architectures.Ive never,never understood that. There is absolutelly nothing that is not functional under 64 and works under 32. Flash plugin works fine.Just fine. I think this is clearly a M$ windows way of thinking “32 bit for everything,no matter what…” I found previous 64 bit versions of Open Suse to be noticeably buggier than the the 32 bit versions. This latest one is the best I have seen so far, but people are still reporting some problems. Also, not all programs are available in 64 bit, and I have used both versions but notice no performance increase at all by going to 64 bit. That said, I’m going to re-install it today on another partition, but plan to use and compare the 32 and 64 bit versions side by side and see which one I like better. I have installed 64 bit three or four times already, but it never seems to last very long on my system before running into some major problem and I just ditch it and go back to 32 bit.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474744.31/warc/CC-MAIN-20240228175828-20240228205828-00129.warc.gz
CC-MAIN-2024-10
2,098
11
http://jenniferhuber.blogspot.com/2012/03/how-to-get-cisco-mse-virtual-machine-up.html
code
It took me a bit of time to get the new Cisco MSE VM up and running on my ESXi 5.0 box. I used the vSphere client from an XP64 vm to deploy the OVF template according to the video instructions posed by Cisco on YouTube. But when the VM import was completed, I couldn't start the VM because the MSE OVA file is configured with 8 vCPUs. The error I received when trying to startup the MSE VM was indicating that I needed to run the command esxcfg-advcfg -s 1 /Cpu/AllowWideVsmp at the CLI of the ESXi server. I had to do some searching to figure out how to get to the CLI of theESXi box. The solution I found was written up ages ago by Rick Vanover. I also found out the hard way that you can't have two vSphere clients managing the same ESXi box. If you find yourself facing the error message Call "PropertyCollector.RetrieveContents" for object "ha-property-collector" on "[IP ADDRESS] ESXi failed. that's what's going on there. Of course, I also forgot what the default username/password combination is for an MSE VM. For the record, the default user ID is root and the default password is password.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119356.19/warc/CC-MAIN-20170423031159-00151-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,100
5
https://jimbobbennett.dev
code
Hi, I'm Jim. I do developer advocacy Developer Advocate. Award winning YouTuber. I am a Regional Cloud Advocate focusing on building out and skilling communities in the Pacific North West, with a focus on the Microsoft Reactor in Redmond, Washington, as well as a passion for the internet of things, edge computing and TinyML. I’m British, so I sound way smarter than I actually am. In the past I’ve lived in 4 continents working as a developer in the mobile, desktop, and scientific space. I’ve spoken at conferences and events all around the globe, organised meetup groups and communities, and written a book on mobile development. I also hate and am allergic to cats, but I have a 9-year-old daughter who loves cats, so I have 2 cats.Here are a few technologies I've been working with recently: Find me on the internet
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652959.43/warc/CC-MAIN-20230606150510-20230606180510-00498.warc.gz
CC-MAIN-2023-23
827
7
https://onlinecourses.science.psu.edu/stat503/node/72
code
14.4 - The Split-Split-Plot Design The restriction on randomization mentioned in the split-plot designs can be extended to more than one factor. For the case where the restriction is on two factors the resulting design is called a split-split-plot design. These designs usually have three different sizes or types of experimental units. Example 14.4 of the text book (Design and Analysis of Experiments, Douglas C. Montgomery, 7th and 8th Edition) discusses an experiment in which a researcher is interested in studying the effect of technicians, dosage strength and wall thickness of the capsule on absorption time of a particular type of antibiotic. There are three technicians, three dosage strengths and four capsule wall thicknesses resulting in 36 observations per replicate and the experimenter wants to perform four replicates on different days. To do so, first, technicians are randomly assigned to units of antibiotics which are the whole plots. Next, the three dosage strengths are randomly assigned to split-plots. Finally, for each dosage strength, the capsules are created with different wall thicknesses, which is the split-split factor and then tested in random order. First notice the restrictions that exist on randomization. Here, we can not simply randomize the 36 runs in a single block (or replicate) because we have our first hard to change factor, named Technician. Furthermore, even after selecting a level for this hard to change factor (say technician 2) we can not randomize the 12 runs under this technician because we have another hard to change factor, named dosage strength. After we select a random level for this second factor, say dosage strength of level 3, we can then randomize the four runs under these two combinations of two factors and randomly run the experiments for different wall thicknesses as our third factor. The linear statistical model for the split-split-plot design would be: Using the Expected Mean Square approach mentioned earlier for split-plot designs, we can proceed and analyze the split-split-plot designs, as well. Based on Expected Mean Squares given in Table 14.25 to build test statistics (assuming the block factor to be random and the other factors to be fixed), , and are whole plot, split-plot and split-split-plot errors, respectively. Minitab handles this model exactly in this way by GLM. (This was Table 14.22 in the 7th edition. The 8th edition has only the factors and EMS without the list of subscripts.) Table 14.25 (Design and Analysis of Experiments, Douglas C. Montgomery, 8th Edition) However, we can use the traditional split-plot approach and extend it to the case of split-split-plot designs as well. Keep in mind, as mentioned earlier, we should pool all the interaction terms with the block factor into the error term used to test for significance of the effects, in each section of the design, separately.
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00605-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
2,893
8
https://developer.usoft.com/documentation/80doc/source/extfile/webdesigner/htm_howtoembedapageinanotherpage.htm
code
How to Embed a Page in Another Page You can embed a page in another page simply by dragging the page to be embedded from the Web Designer catalog in the target page, or in one of its group objects. If the page to be embedded is a related page or lookup page, synchronization between parent and child data is automatically applied. With only a few layout changes necessary, this is a very easy way to create a master-detail page. To embed a page in another page: This mainly involves layout changes, like deleting group objects and buttons that already exist in the target page, for example the BottomButtonGroup. An additional page appears in the object tree.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100800.25/warc/CC-MAIN-20231209040008-20231209070008-00017.warc.gz
CC-MAIN-2023-50
659
7
https://supportforums.cisco.com/discussion/10067361/sh-access-list-output-check
code
I can't find anything on CCO or Google to explain the 'check==378' in the output below. router#sh access-list 1 Standard IP access list 1 permit 10.25.0.0, wildcard bits 0.0.0.255 check=378 This is just a snippet. Other ACLs on the same router do not have the 'check' field - just this one, which is an access-class ACL on the vty. See config below: access-list 1 permit 10.25.0.0 0.0.0.255 line vty 0 4 access-class 1 in Also, Cisco Output Interpreter just chokes on this output. The router is a 1721 running 12.2(8)T5. Please advise. - Jonathan
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426234.82/warc/CC-MAIN-20170726162158-20170726182158-00542.warc.gz
CC-MAIN-2017-30
546
10
https://discuss.elastic.co/t/accidentally-deleted-all-indices-from-the-cluster-how-do-i-restore-the-data/148806
code
I know we messed up big time by accidentally deleting all the indices from the cluster but I needed input regarding what is the proper procedure to restore it back. While running the curator tool, accidentally gave the wrong action file thus resulting in all the indices being deleted from the cluster. This cluster had all x-pack settings configured, ssl settings configured, ldap and so on. The issue now is that, though I do have a snapshot of the recent data present in one of the folders of the server (This path is not registered as a repository yet, though it contains the snapshot), the issue is in starting elasticsearch service again because the .security index has also been deleted thus we get an x-pack exception. I have stopped the ES, kibana and logstash services for now. I just want to know the right sequence of restoring the data from the snapshot I have, I have not restored a snapshot manually before so It would be really helpful if someone could just guide me regarding this.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828018.77/warc/CC-MAIN-20181216234902-20181217020902-00343.warc.gz
CC-MAIN-2018-51
998
3
https://news.chinadefi.com/aave-dev-report/
code
Another week of development updates on Aave Protocol! One month has already passed since the release of Aave Protocol on the main Ethereum network, and its growth continues. Following what we mentioned on the Security Report two weeks ago, the goal of this periodic blog post will be updating the Aave community about what is happening within and around the protocol, from the perspective of Aave’s developers, and trying to be completely transparent. Since this scope covers a wider range of topics than just security (which of course will be one of the most important parts), the name of the blog post has been changed from “Security Report” to “Dev Report”. The protocol grows day by day, and the numbers are beginning to look quite noteworthy! For a quick overview: - The Market Size of the protocol is 17.5 Million USD - … of with Total Value Locked (TVL) of 13.5 Million USD - …and 4 Million in borrowed assets. - LINK (6 Million USD), ETH (3.7 Million USD) and LEND (1.7 Million USD) are the assets with highest amount locked. - ETH (~1.1 Million USD), DAI (~700 000 USD) and SNX (~540 000 USD) are the most borrowed assets. It makes us especially proud to see how the rhythm of integrations is accelerating week-by-week and how more and more fellow developers contact us with amazing ideas which can be plugged to Aave protocol. This last two week, the protocol and/or its aTokens have been integrated on Iearn.finance, Sablier, Totle, Nexus.mutual and Trust Wallet among others. The audit process for the first version of the governance framework -which will have control over Aave Protocol- has finished with good results. During the following weeks, the code will be released publicly along with an article explaining it in detail. At the same time, we will start an open bug bounty where everybody will be welcome to try to find any potential bugs in the code base. As soon as the governance tools are ready to provide the best user interaction, the ownership of Aave Protocol will be moved to the core contract of the governance framework, and all the contract updates of the protocol will need to be done by passing governance proposals. As extensively explained here, the depositors’ yield has been moved down from 0.35% to 0.09%. From a technical perspective, this leaded to the deployment of a new LendingPoolParametersProvider to include the updated parameter. During the last two weeks, no security incident on the smart contracts of the protocol has appeared. Regarding the Aave Protocol subgraph on TheGraph and our user interface, we found and fixed the following issues: - Issue #1: Inconsistency on useAsCollateral field on depositor data. When a depositor with a deposited balance redeems everything, on the smart contracts layer, the field useAsCollateral is set to false by design. The Aave Protocol subgraph was not implementing this logic causing the off-chain data of the user to be incorrect and inconsistent with the on-chain data (correct). We fixed this issue on our subgraph to have full consistency on this part of the logic. - Issue #2: Lack of precision on Health Factor calculation. Due to some off-chain calculations, the calculation of the user Health Factor on our interface was lacking some precision. The function performing the calculation was fixed. - Issue #3: Incorrect calculation of the maximum amount to redeem. The maximum liquidity available in the pool was to taken into account to get the maximum amount to redeem for a deposit. The validation was changed in order to be consistent with the on-chain validations. This first month of the protocol has been really exciting and we are sure it will continue this way! As always, if you want to be up to date with the latest news related to Aave, please join our telegram and discord channel, and follow us on twitter and facebook. Don’t miss the next Dev report in two weeks!
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400222515.48/warc/CC-MAIN-20200925053037-20200925083037-00145.warc.gz
CC-MAIN-2020-40
3,894
21
http://map.squeak.org/package/0b7ae674-e4bb-428b-8f5e-39eec85a02a2
code
broadcasting information to registered objects SUZUKI Tetsuya (tetsuya) - The official distribution of Squeak. Non official package - Just a package for Squeak, no community guarantees. :) - Class libraries for Squeak to use for development - The MIT license is like BSD without the advertising clause. As free as it gets, suitable for cross Smalltalk 100% reuse. The recommended license for Squeak since the 4.0 release. NotificationCenter provides a mechanism for broadcasting information to registered objects. - first release
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649293.44/warc/CC-MAIN-20230603133129-20230603163129-00614.warc.gz
CC-MAIN-2023-23
529
9
https://forum.bubble.io/t/bit-ly-plugin-for-short-url-and-stats/47873
code
It would be great to have a Bit.ly plugin that allows the creation of short URLs and the ability to get the stats back from Bit.ly. Agreed, would be nice. As time permits, I may give a closer look to building a plugin. If you’re interested in building it directly, I created a fairly in-depth tutorial covering how to build a link shortener within Bubble. Thanks! I’ll give it a look and see if I have the skills to pull it off. @dan1 thanks for the in-depth video. Great to learn. I’m trying to find a way to shorten bubble URLs to be sent inside of a text message. So I want to start with a long bubble URL and turn it into a shortlink and when that shortlink is clicked, take the user to the original bubble URL. Any suggestions on how to do this?
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474594.56/warc/CC-MAIN-20240225071740-20240225101740-00183.warc.gz
CC-MAIN-2024-10
756
6
https://github.com/ansible/ansible/issues/7852
code
Join GitHub today Global ignore_errors #7852 My playbooks take quite a while to run (a couple minutes for no-change runs) and if I add 10 tasks while developing and they have errors, I need to re-run the playbook over and over to fix each iterative issue, making a small change take possibly a very long time. It would be nice if I could set a configuration setting to continue on error globally so that I can have all the tasks run, fix the red ones, then vagrant destroy/up and ensure nothing is still red. Steps To Reproduce: While it's true what @mpdehaan stated, it would be a nice feature for the check mode. Consider an example where you have a large role or playbook and want to run your check until the end, so you can fix occuring errors in one go and not iterative. Also, you could see what would fail because of previous tasks failing. Ignore what I mentioned, I just saw there's something like that already. You can specify It is rather disappointing that it is not possible to set ignore_errors on the command-line. This renders the
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827097.43/warc/CC-MAIN-20181215200626-20181215222626-00124.warc.gz
CC-MAIN-2018-51
1,046
8
https://www.myswissalps.com/forum/topic/accommodation-in-interlaken
code
Feb 15, 2019 - 2:29 PM I am Anand Gupta travelling to Interlaken from milan with my family (total no. of Adults:4). I am looking for apartment or hotels which are near to interlaken station. To be honest with you i dont want to spend much in hotel as i will be exploring Switzerland all day. so i kindly request you to suggest me the budgetary hotel or apartment.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00374.warc.gz
CC-MAIN-2021-39
363
2
https://www.planetjoel.com/index.php/2023/04/30/chatgpt-and-auto-gpt/
code
Like everyone it seems I have been playing with ChatGPT and Auto-GPT trying to understand their capabilities, strengths and weaknesses and get a feel for where the technology is heading. Here are my observations thus far. GPT-3, GPT-3.5 Turbo, GPT-4 and Bard I saw a big jump in response depth, comprehension and usability between GPT-3 to GPT-3.5. In most of my use-cases the differences between 3.5 and 4 have been largely incremental but that is probably more a reflection of how I’m using it than a true reflection of the power of the tools. It seems GPT-4 is necessary when you are wanting a significant analysis – lots of tokens both input and output – or when the subject matter depth is high. Generally speaking GPT-3.5 Turbo seems great for most use-cases and I love the speed. I have access to Bard but I haven’t had enough time with it yet to draw any conclusions. Bing Search powered by AI For a long time I have preferred to read web content via Pocket or via similar services that convert the web to pure text for a few reasons – it kills all the unnecessary clutter, it’s so much easier and comfortable to read and it’s much more efficient. Bing Chat is a level above this by reading it for you and summarising. For most of my day to day web searches I am finding Bing Chat is the best option especially if you want a quick answer to a question. There are still a few weird quirks: - I asked a simple “how many days until this date” question and it got it wrong - It sometimes just states terrible web information as fact - Even though they are pushing it for things like recipes I found the recommendations kind of lame and there was limited way to say “I don’t like these” Auto-GPT is an experiment that sits on-top of GPT and attempts to automate the prompt, read and interpret feedback loop. It can search the internet, execute code (in the latest version), run GPT agents and has both short and long-term memory. It’s both disappointing, inspirational, mind-blowing and weird all at the same time. Disappointingly so far I haven’t had much success at getting it to write a program – even a very simple one. I thought perhaps this was the purpose and what it would be best at. It started to do very strange things when I asked it to write an Alexa skill like googling “how do I write a python function”. For researching, it’s a very powerful and clever tool. I asked it to find books for my wife who wanted a new novel with a female protagonist, sci-fi but not military, that had an audiobook and was similar to “The Lost Tomb” series. With a few instructions I set Auto-GPT to work and asked it to summarise the plots of the top 5 books it found meeting these criteria. It took about 10 minutes and came back with a summary of books all meeting this criteria and she is working her way through them. Something we could have done ourselves perhaps but it’s absolutely incredible AI can do a fairly complex task like that for us. Watching it “think” is surreal as it thinks through every action and considers the pros and cons of what it is about to do. The introspection and consideration of risks or downsides of a course of action is the most impressive part of this technology. I can see it’s very much in the early days but it’s already useful and is basically at a level of a junior employee or assistant. I’m having incredible fun exploring and playing with what this AI can do and thinking about how it will impact my work and personal life. I firmly believe we are witnessing history in the release of this technology and the disruption, empowerment and change this tech will bring will be enormous. I’ll continue to provide updates as I go. Leave any comments on your own Generative AI experiences if you like.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474660.32/warc/CC-MAIN-20240226130305-20240226160305-00427.warc.gz
CC-MAIN-2024-10
3,792
15
http://petepirc.blogspot.com/2012/07/cbip-certification.html
code
A few months ago I enrolled in the CBIP (Certified Business Intelligence Professional) certification program administered by the TDWI institute. I thought I'd share my experience as it might be useful to someone else doing this certification. I have a finance background (MSc in Finance/ management and I am a CFA charterholder); however I enjoy various IT subjects and the data analysis in particular so I said why not giving the CBIP a go. I have already read a relatively large number of technical books on the business intelligence subject so I thought that the CBIP certification will give me at least a general guidance and a framework - in addition to being able to learn new things as well as putting nice 4 new letters on my CV :) So far the course study was very interesting and I can say I have learned a lot. I have sat the Data Warehousing core exam - and passed with 77%. This week (Friday) I'm taking the speciality Business Intelligence & Analytics exam, fingers crossed. I'm already dreading a bit the last mandatory exam- IS Core, since I don't have an IT background it might be a bit of a challenge from what I hear. But hey, a good challenge ahead! My strategy was and is to read books from the recommended reading list + some Wikipedia articles. Even though the exam guide book is expensive (cca. $100 if I recall well), it is very useful. It guides you through all of the exams with the topics outlines, gives you the list of the recommended readings and it has some sample exams as well. So my first advice would be - buy it. As I already mentioned, my strategy is mainly reading some of the recommended books.Here is the list of what I have read so far: For the Data Warehousing exam: - Kimball & Co: The Data Warehouse Lifecycle Toolkit : VERY good book on data warehousing and very useful for the exam. It is quite a heavy read, it is stated to have cca. 500 pages but on my Kindle DX it looked more like 1500 pages. It took me a while to get through it and I read it from cover to cover but it is definitely worth it. - Kimball & Co: The Kimball Group reader: this one is actually a formatted collection of articles that the Kimball group published over the years. I didn't read everything from it because some articles are a repetition from the above book - but it was still useful and I learned quite a few new things. - Larissa Moss & Co: Business Intelligence Roadmap: The Complete Project Lifecycle : I read some parts of this book for the DW exam and some parts for the Business Analytics one. There is some overlap with this book and the Kimball ones. However this book doesn't only cover the Data warehousing side of things but also other aspects of the BI program/ project. Consequently it is not as complete as the 2 above books but it gives you a good overview and summary. - I also read a few Wiki articles on subjects not covered by these books. Prior to enrolling on the CBIP I read a book by Roland Bauman: Pentaho Solutions: Business Intelligence and Data... I bought this mainly because of my work but found that it had several chapters that were very useful for the exam as well. For the Business Analytics exam: - David Loshin: Business Intelligence: The Savvy Manager's Guide: I think this was the most useful book from the ones I read for the Business Analytics exam in terms of a match towards the topics outlines. It was good overall; the only problem I had with it was that I had the impression that the writing style of the author was switching several times between what I would call an easy reading and then quite a sophisticated one. Or maybe I was just tired after work when I read some more technical parts of it :) - Thomas Davenport: Analytics at Work...: very good book. The author is a business analytics and management consultant and at least for me it was quite refreshing to read a book from a business perspective for a change. This one was the least technical from all books I have read so far but it was a very interesting read. And it was really funny at times as well. - Larissa Moss & Co: Business Intelligence Roadmap: The Complete Project Lifecycle: as already mentioned before - Carlo Vercellis: Business Intelligence: Data Mining...: this one was more of a stats and maths book and their application to business analytics. Despite the subject it was still quite readable. I haven't read everything (maybe 50% of the whole book) because some of the topics were more advances than what I needed. The book has 2 main parts - qualitative introduction to BI (data, warehousing...) and then statistical models and their application to BA problems. So there we go, that's all for now. As I said I'll be sitting the Business Analytics exam this Friday so I will see whether my strategy continues to be successful or not. You might say that I'm reading an unnecessary amount of materials for these exams. As I explained, I actually enjoy reading material on these subjects and my primary motivation is to learn while doing the exam, not just have the certification - although that will be nice :) I spent roughly 1.5 months preparing for each of the exams. You might also say that I'm overly positive in my review of the above books. Maybe but the reason might also partially be that I'm only buying books on Amazon (with my Kindle) that already have very good reviews. The only times I found I didn't really like books I bought on these topics was when I bought books that were actually not what I was looking for or were too technical. Till the next time!
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578759182.92/warc/CC-MAIN-20190426033614-20190426055614-00291.warc.gz
CC-MAIN-2019-18
5,522
19
https://www.cosmos.esa.int/web/gaia/gaia-people/francesca-de-angeli
code
Francesca De Angeli - Gaia Francesca De Angeli Institute of Astronomy Francesca De Angeli completed her PhD at the Astronomy Department of the University of Padova (Italy) at the end of 2004. During her PhD she worked on photometry and radial velocity data of Galactic globular clusters. She has been also developing a new technique to determine accurate distances and ages of globular clusters through axisymmetric modelling of their internal dynamics. Francesca is now a postdoctoral fellow at the Institute of Astronomy, University of Cambridge (UK). There she is collaborating with Floor van Leeuwen on the preparation of a software tool-box, aimed at providing basic algorithms for future shell and core tasks for the Gaia data analysis. She is also involved with the Simulation and Photometry working group activities. Gaia people archive
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604397.40/warc/CC-MAIN-20200121132900-20200121161900-00350.warc.gz
CC-MAIN-2020-05
844
6
https://www.tutorialkart.com/swift-tutorial/swift-array-remove-all-elements/
code
Swift Array – Remove All Elements To remove all elements of an Array in Swift, call removeAll() method on the array. The syntax to call removeAll() method on the array is where parameter is optional, and hence if we do not provide an condition, all the elements in the array are deleted. In the following program, we will take an array fruits with five elements, and remove all of its elements using var fruits = ["apple", "banana", "cherry", "mango", "guava"] fruits.removeAll() print(fruits) All the elements have been removed from the array, and an empty array is printed in the output. In this Swift Tutorial, we learned how to remove all the elements from an array in Swift.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00365.warc.gz
CC-MAIN-2023-50
681
10
https://community.zeek.org/t/log-delays-and-logger-cpus/5866
code
I have a cluster running Bro 2.6.4. One host runs a manager and logger, 8 other hosts run proxy and worker nodes. Lately the logger node has not been able to keep up with the logs, and I’ve noticed that the most recent entries in the current/conn.log are significantly delayed (I’ve seen delays as high as 90 minutes). The logger process has maxed out CPU usage on core 1. The node.cfg file specifies 8 CPU cores (all on the same NUMA node as the NVMe drive where the logs are written): broctl nodes shows that only 1 CPU core is pinned: logger - addr=10.x.x.x aux_scripts= brobase= count=1 env_vars= ether= host=bromanager-01.umn.edu interface= lb_interfaces= lb_method= lb_procs= name=logger pin_cpus=1 test_mykey= type=logger zone_id= Can pin_cpus be used with a logger node? Any other suggestions for improving logger performance?
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00294.warc.gz
CC-MAIN-2022-33
838
6
https://www.transitdrivein.com/content/Retro+Tuesdays
code
Retro Tuesdays are such a meaningful part of each season at the Transit Drive-In Theatre. This year, Retro Tuesdays are celebrating their 10th anniversary at the Drive-In! Here is your Retro Tuesday Drive-In lineup for the 2022 season! May 31: The Breakfast Club + Pretty In Pink June 7: The Goonies + Gremlins June 14: American Graffiti + Caddyshack June 21: Twister + Austin Powers June 28: National Lampoon's Animal House + The Blues Brothers July 5: Willy Wonka and the Chocolate Factory + Wizard of Oz July 12: Back to the Future + Ghostbusters July 19: The Outsiders + Stand By Me July 26: Elf + Christmas Vacation August 2: Ferris Bueller's Day Off + Indiana Jones and the Raiders of the Lost Ark August 9: Harry Potter and the Sorcerer's Stone + Harry Potter and the Chamber of Secrets August 16: Monty Python and the Holy Grail + The Big Lebowski August 23: Grease + Dirty Dancing August 30: Fast Times at Ridgemont High + Dazed and Confused
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335190.45/warc/CC-MAIN-20220928082743-20220928112743-00066.warc.gz
CC-MAIN-2022-40
950
16
http://skillgun.com/question/1349/c/data-structures/what-is-the-best-case-time-complexity-of-a-binary-search-algorithm-best-case-means-given-item-is-in-middle-position
code
What is the best case time complexity of a binary search algorithm? (Best case means given item is in middle position) O(n log n) Since in the best case given element is available in middle node. Since in the first while loop only we will find that element is available in middle node. so total no of comparisons are only 1. so complexity is O(1). Back To Top
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864110.40/warc/CC-MAIN-20180621075105-20180621095105-00146.warc.gz
CC-MAIN-2018-26
359
4
https://www.waze.com/forum/viewtopic.php?f=11&t=27386
code
Thanks for the detailed description, now I'm pretty sure you run the same system like AU, getting shields only from primary streetnames following a predefined pattern. You'll have to define patterns for each possible shield, so currently it might be: State Hwy x State Hwy xx State Hwy xxx For getting a shield for 98a an additional pattern is required: State Hwy xxa PhantomSoul wrote:Hopefully Waze will expand its shield pattern recognition to include these in the near future. Well, Waze will wait for you to give them the patterns, they don't get active themselves on this. As I understand US community structure you run each state as Europe runs a country. For shields are a country function, you will have to coordinate a pattern-table US-wide. To show an oval sign for SR26 you need either recode the name to State Hwy 26 or submit a pattern SRxx. Guess you will have to decide which way to go not case by case but national.
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00013-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
932
10
http://trylinux.org/2004/05/
code
I recently got warcraft working on my Zaurus SL-860 here are the details: CACkO Zaurus Qtopia ROM 1.21 Not 1.21a libsdl_1.2.5-slzaurus20031201 older version, but seems to work stratagus_2.0.0-1_arm.ipk Thanks lucho! From ronba: You must use CD warcraft2 version DOS and wargus-2.0pre1.tar. In the tar there is a script build.bat for windows and build.sh for linux, this file will allow you to catch the data of CD of warcraft 2, the result of script is a directory data.wc2 , to copy this directory data.wc2 in SD or CF to /mnt/card/data.wc2. From a shell type the following: ln -sf /mnt/card/data.wc2 /opt/QtPalmtop/lib/stratagus Now, Launch Stratagus from the Application tab on your Zaurus and play!
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321553.70/warc/CC-MAIN-20170627203405-20170627223405-00268.warc.gz
CC-MAIN-2017-26
702
8
https://docs.provide.technology/vault/
code
State-of-the-art key management focusing on advanced privacy and messaging capabilities. The Vault service offers state-of-the-art key management with a focus on providing advanced privacy and messaging capabilities (i.e., zero-knowledge proofs, SNARK-friendly hash functions, double-ratchet algorithm, etc.) in a single enterprise-grade API. This section describes the elliptic curves and key specifications which are currently supported by the API. Supported curves and key specs are defined with a typeof either or symmetrickeys support key derivation (i.e., such as the ChaCha20stream cipher). Other key specs, such as RSA, are provided for convenience and to achieve table-stakes feature-parity with industry-standard key management solutions such as AWS Key Management Service, Azure Key Vault, Hashicorp Vault, etc.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.35/warc/CC-MAIN-20230923062631-20230923092631-00536.warc.gz
CC-MAIN-2023-40
822
6
https://xangle.zendesk.com/hc/en-us/articles/360038581872-Trigger-modes
code
Fire all of the cameras at the same time or do crazy complicated triggering patterns? You'll find everything in this document! The trigger modes can be changed from the dashboard ("Trigger Mode" button) This is as basic as it gets. This is going to try to fire all of the cameras at the same time. We do send a signal with a < 1ms accuracy to the cameras (all by usb), but your cameras may take longer to process the signal. In any case, using strobes is the ultimate way of having a perfectly frozen shot. See these two articles for more details: Using constant light and Using strobes. Trigger all cameras for a specified number of iterations, with or without an intervalometer Trigger all cameras with a specific delay between each (to create movement). Depending on the speed of your subject and the accuracy of your cameras, you can go as fast as 1ms. Example: https://www.instagram.com/p/Bs6pSdRhlL4/ The seamless option is creating an illusion of an endless loop on full 360-degree structures. Two cameras are always triggered at the same time, meaning that you're going to see the same action from both sides. Example: https://www.instagram.com/p/Byf5AxfhlO5/ 4- Jump and Freeze In the Jump-and-freeze mode, you can split the triggering into two stages to make it feels like you're slowing down the time right after the action takes off. In the example below, the transition is going to happen halfway (camera #6 on a 12 units installation). The first 6 cameras are going to be triggered in a 20ms interval (regular action speed), then cameras 7 to 12 are going to be nearly frozen (1ms interval)
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894759.37/warc/CC-MAIN-20201027195832-20201027225832-00255.warc.gz
CC-MAIN-2020-45
1,604
8
https://www.tr.freelancer.com/projects/verilog-vhdl/vhdl-verilog-translation/
code
I have .vhdl files for an implementation of google chrome's 'dino run' which appears when the user has no wifi connection. However, I would like to have the same functionality with Verilog description language. Bu iş için 8 freelancer ortalamada $125 teklif veriyor I am really happy to help you out of this project. I would like to introduce that I am an freelancer with 100% JOB COMPLETED in VHDL/VERILOG. Relevant Skills and Experience VHDL/Verilog/FPGA Proposed Milestones $100 Daha Fazla Hi. I'm experienced in both verilog and vhdl. let me know if you need help in your project Relevant Skills and Experience vhdl,verilog,digital design etc Proposed Milestones $166 USD - coding i am a embedded hardware and software expert and have rich experience with verilog/vhdl . i designed pcbs and developed firmware for it , manufactured prototype directly. Relevant Skills and Experience you can check Daha Fazla I can complete this project within a stipulated time.I have verilog design experience. currently working as RTL design engineer and freelancer.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515029.82/warc/CC-MAIN-20181022092330-20181022113830-00431.warc.gz
CC-MAIN-2018-43
1,057
6
https://bz.apache.org/SpamAssassin/show_bug.cgi?format=multiple&id=2167
code
|Summary:||bayes: save unmunged tokens during scan, for later learning| |Product:||Spamassassin||Reporter:||Kelsey Cummings <kgc>| |Component:||Libraries||Assignee:||SpamAssassin Developer Mailing List <dev>| patches for new sonic.net features example backend server Description Kelsey Cummings 2003-06-30 17:05:45 UTC Here is the first run at a group of hacks to increase sa and bayes usefulness in a large scale environment where users either don't have access to shell accounts or aren't smart enough to use them. These patches enable a few things: hashing (leafing really, but is a module) of bayes statedirs seperately from userconfiguration dirs (or SQL) logging of bayes tokens per messages into unique files in a folder off of the statedir for later use to 'train' bayes via an external method. (We've written a server that enables users to vew their last N hours of email and to select displayed messages as spam or ham.) sa-btoc-learn takes care of the backend work of importing the tokens as spam or ham into the bayes databases. configuration of these new features is accomplished through local.cf and there are a couple of new modules as well. This code has not be thouroughly tested yet, but we wanted to get some other people looking at it. We'll probably have some updates in a few days when the rest of our new hardware arrives and we are able to accomplish some large scale tests. External tools need to handle removal of old 'bayes token logs' like a cron job, etc. Comment 1 Kelsey Cummings 2003-06-30 17:07:01 UTC Created attachment 1116 [details] patches for new sonic.net features Comment 2 Kelsey Cummings 2003-07-03 12:23:05 UTC Created attachment 1129 [details] revised patch fixed patches (I think.) Comment 3 Kelsey Cummings 2003-07-03 12:28:53 UTC Created attachment 1130 [details] example backend server Example server code to hook into via web app or other front end for user to access the learn_from_token_log pre-tokenized messages. Comment 4 Justin Mason 2003-08-28 16:35:48 UTC kgc -- looks interesting as I mentioned a while back. a few comments: 1. it's a bit untidy and needs cleaning up; there's config settings in the local.cf example, a shortage of documentation ;), the patch includes the "Makefile" (patching using "cvs diff" is better as it ignores generated files). 2. I think it'd be better if instead of dumping some metadata and the tokens as newline-separated data in the storage files, it could use a cleaner parseable format -- such as YAML or a custom one -- so that the format is extensible. I can imagine there may be situations further down the line where we want to add other kinds of data from the message -- full message text, more metadata, etc. If YAML sounds like overkill, a simple "Name: value" header-style thing would work fine here (just list all the tokens on one long line ;) A version line at the top of the file would help keep it forward-compatible, too, in case we need to make serious changes in future. 3. Disk space usage may be an issue -- as I mentioned, the tokens from a mail are often about the same size as the mail itself. Perhaps a good approach would be to store the tokens gzipped - but then we have to consider how to safely store binary data. (If the store can use 1 index file containing the metadata and then subfiles for the gzipped data, that'd work fine.) I still haven't got Dan to comment on it, but I think it makes sense ;) Comment 5 Duncan Findlay 2004-12-01 17:44:25 UTC Is this really an issue now that we have all this SQL stuff? I'm going to assume it's not given that there's been nothing for 17 months... Closing WONTFIX, reopen if necessary. Comment 6 Justin Mason 2004-12-01 23:13:46 UTC yeah - Kelsey, feel free to pipe up with your opinions -- I'm wondering if it might be more appropriate to have some kind of support for this, stored in an SQL db for example. probably best to take that to a thread in dev@ rather than on this bug. Comment 7 Kelsey Cummings 2004-12-01 23:18:01 UTC Well, even with the SQL stuff as is it doesn't really address the issues that we were trying to deal with with these patches -- the ability for the spamd servers to track message tokens and learn on them through a web interface at a later date. But - the projects pretty much dead on this end since we still haven't been able to address the SQL performance to do site wide per-user bayes. We'll bring it up again if and when we start to work on it. Comment 8 Michael Parker 2004-12-01 23:30:42 UTC Subject: Re: enhancements to bayes, statedir, new sa-btoc-learn script Learning via spamd would probably be a huge win here, even without the BayesSQL stuff in place. Actually I can think of several solutions using the existing API, or soon to come API, that would work for what you are trying to do. Michael Comment 9 Justin Mason 2005-02-15 11:32:53 UTC I've just heard that apparently DSpam does something similar to this -- it dumps the list of tokens to the SQL database for every message, adds a signature header to the filtered mails, and relearning is then just a matter of extracting the signature from the (possibly mangled) message and extracting the token list from the db that matches that sig. This may be useful functionality, since it cleans up one aspect that's quite tricky in many environments -- it's no longer necessary for the user to know how to safely transmit the message they want to learn, in an unmunged format. the message can be thoroughly munged, as long as the Signature header is intact (or just relatively intact). That's possibly the nastiest UI issue with the whole Bayes training thing. anyone think this sounds useful? (reopening just so the idea is tracked.) Comment 10 Daniel Quinlan 2005-03-30 01:08:32 UTC move bug to Future milestone (previously set to Future -- I hope)
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00408.warc.gz
CC-MAIN-2021-49
5,802
27
http://blog.ideafarms.com/tag/testimonial/
code
During our recent session on Design Thinking (and Design Doing) at Makersbox, our goal was to bust myths of Design Thinking that have been perpetuated in the market. And the underlying theme for the session was: Design Thinking is not design Gagandeep Singh Sapra, Founder of MakersBox and SproutBox summarises the session for us: When you hear the word Design Thinking, your mind hears Design and you talk about design and you think that it only a designer’s job; while had that been a different word, you would have thought differently – the meanings that I attached to it would not have happened.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514495.52/warc/CC-MAIN-20210118092350-20210118122350-00726.warc.gz
CC-MAIN-2021-04
603
4
https://forum.sierrawireless.com/t/bx310x-at-command-for-hostname-lookup-request-dns-client/21224
code
I am trying to use the BX310x Dev kit for IOT application. Going through the command set provided in the AT command reference manual, I am not able to find the DNS client command for Hostname lookup. Is the command missing in the AT reference manual?. Or is this not available as part of the this Firmware? .Please let us know The Firmware version is Sierra Wireless Copyright 2018 Here is the AT command snapshot from the module.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198287.23/warc/CC-MAIN-20200920161009-20200920191009-00261.warc.gz
CC-MAIN-2020-40
430
5
https://fieldnotes.alltrails.com/blog/2019/06/07/real-time-overlays-are-now-even-more-awesome/?utm_source=rss&utm_medium=rss&utm_campaign=real-time-overlays-are-now-even-more-awesome
code
Real-Time Overlays are Now Even More Awesome One of my favorite AllTrails Pro features is our real-time overlays. What’s a real-time overlay? Great question! It’s a way to see more useful info on any given map. We’ve historically had 4 real-time overlays available for Pro users: Visualizes all of the user recordings tied to any trail. Easily find the most popular routes or less-traveled paths that other members of the community have taken. Displays recent fire activity, including year and incident name. Color coded by recency with red indicating the most recent and yellow being the oldest. Displays real-time satellite and radar weather data over any map layer. Know what to expect before you hit the trail and make sure you’re prepared. Displays a real-time air quality index over any map layer. Color coded with green indicating the cleanest air and red indicating the most polluted. Super useful for people with asthma or those trying to escape urban congestion. Today we’re announcing two all new real-time overlays – Pollen and Light Pollution. Displays a color coded overlay of excessive nighttime artificial light sources worldwide over any map layer. Areas displayed in red show the most light polluted areas while green shows the least polluted. This layer give you insight into whether or not you’ll be able to see stars at night. Displays a daily forecast of tree pollen levels over any map layer in the U.S. and Europe. 10 different pollen species are included. The overlay is color coded with green indicating the lowest concentration of pollen and red indicating the highest concentration. Great for people with seasonal allergies. We’re kicking around a few ideas for more real-time overlays, so stay tuned! And if you have any suggestions or feedback, email us anytime – firstname.lastname@example.org. We’d love to hear from you Get in touch with any questions, comments or concerns at email@example.com
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655894904.17/warc/CC-MAIN-20200707173839-20200707203839-00286.warc.gz
CC-MAIN-2020-29
1,947
12
http://www.aspmessageboard.com/showthread.php?94599-Seemingly-simple-problem!
code
Hope somebody can help with what seems to be a simple problem! <BR><BR>I have a loop which runs writes table rows. Once finished it closes the table and then at the bottom I want to put a horizontal rule. However, when I try this the horizontal rule appears above the table! I've tried different methods but none seem to work, any ideas?
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/warc/CC-MAIN-20161020183837-00066-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
337
1
https://jeffpar.github.io/kbarchive/kb/043/Q43092/
code
Q43092: Underscore (_) Not OK in Variable Name or Line Continuation Article: Q43092 Product(s): See article Version(s): 4.00 4.00b 4.50 Operating System(s): MS-DOS Keyword(s): ENDUSER | SR# S890327-20 B_BasicCom | mspl13_basic Last Modified: 15-DEC-1989 The underscore character (_) is not allowed within a QuickBASIC variable, SUBprogram, or FUNCTION name in all versions of QuickBASIC. Also, you cannot type an underscore as a line continuation character within the QB.EXE environment of QuickBASIC Versions 4.00, 4.00b, and 4.50, the QB.EXE environment in Microsoft BASIC Compiler Versions 6.00 and 6.00b, or the QBX.EXE environment in Microsoft BASIC PDS Version 7.00. If an editor other than QB.EXE / QBX.EXE is used to write your program, the underscore character is recognized as a valid line continuation character by the BC.EXE compiler (but still cannot be used within variable, SUBprogram, or FUNCTION names). QB.EXE / QBX.EXE can load text files that use underscores for line continuation, but the underscore is stripped out and the continued lines are concatenated. Total concatenated line length is limited to 255 characters in QB.EXE / QBX.EXE and BC.EXE. This information applies to QuickBASIC 4.00, 4.00b, and 4.50, the QB.EXE environment included with the BASIC compiler 6.00 and 6.00b, and the QBX.EXE environment of BASIC PDS 7.00. QuickBASIC Versions 3.00 and earlier allowed you to continue lines with an underscore (_) character. However, in Versions 4.00 and later, the underscore character is not allowed, due to conflicts with the threaded p-code used within the environment and with the ability to perform interlanguage calling, especially with Microsoft C. If you create your program within another editor using the underscore as a line continuation character, and then attempt to load the program into the QuickBASIC environment Version 4.00 or later (or the QBX.EXE environment of BASIC PDS 7.00), the underscores used as line continuation characters are removed and the fragments of lines separated by the underscore are concatenated into one line. For example, write the following program using an editor other than QB.EXE 4.00 or later (or QBX.EXE): A = _ 5 * _ B END If the above program is loaded into the Version 4.00 or later QB.EXE environment or the QBX.EXE of BASIC PDS 7.00, it will be converted to the following: A = 5 * B END THE INFORMATION PROVIDED IN THE MICROSOFT KNOWLEDGE BASE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. MICROSOFT DISCLAIMS ALL WARRANTIES, EITHER EXPRESS OR IMPLIED, INCLUDING THE WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL MICROSOFT CORPORATION OR ITS SUPPLIERS BE LIABLE FOR ANY DAMAGES WHATSOEVER INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL, LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, EVEN IF MICROSOFT CORPORATION OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. SOME STATES DO NOT ALLOW THE EXCLUSION OR LIMITATION OF LIABILITY FOR CONSEQUENTIAL OR INCIDENTAL DAMAGES SO THE FOREGOING LIMITATION MAY NOT APPLY. Copyright Microsoft Corporation 1986-2002.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362619.23/warc/CC-MAIN-20211203091120-20211203121120-00504.warc.gz
CC-MAIN-2021-49
3,107
4
https://community.apachefriends.org/f/viewtopic.php?f=16&t=33687
code
This is my first post, I have been able to solve my other problems by reading previous posts, until now. I have a simple html form on a web project, for inserting info into a mysql database with php. The connection works perfectly and the whole thing works on my website, with its database, it inserts fine. For some reason though in xampp it won't insert into the database. Instead it downloads the information in the browser as a php file, and it appears to be a copy of the php file used to create the insert, no matter which browser I use. I have no idea what is causing this. Also on my website ( which is set up to match in every way with xampp for easy transfer) the form works perfectly with no problems. I work offline sometimes testing new queries and such so I would appreciate any help you can provide.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00316-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
814
1
https://h5p.org/node/693547
code
H5P Interactive Video - free tutorial movies I've recently published a small set of movies explaining how to create a H5P Interactive Video. It's 15-20 minutes, split into 3 separate movies, so you can learn in small chunks :-) I've just made these available for public viewing on our YouTube Channel, in a H5P Paylist, so feel free to use them ! You can use them to learn how to create the H5P Interactive Videos yourself, or use them for staff trainnig purposes if you wish?
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510730.6/warc/CC-MAIN-20230930213821-20231001003821-00067.warc.gz
CC-MAIN-2023-40
476
5
http://stackoverflow.com/questions/15418008/path-from-s-to-e-in-a-weighted-dag-graph-with-limitations
code
Whatever you end up doing, do a BFS/DFS starting from s first to see if e can even be reached; this only takes you O(n+m) so it won't add to the complexity of the problem (since you need to look at all vertices and edges anyway). Also, delete all edges with weight 0 before you do anything else since those never fulfill your second criterion. EDIT: I figured out an algorithm; it's polynomial, depending on the size of your graphs it may still not be sufficiently efficient though. See the edit further down. Now for some complexity. The first thing to think about here is an upper bound on how many paths we can actually have, so depending on the choice of d and the weights of the edges, we also have an upper bound on the complexity of any potential algorithm. How many edges can there be in a DAG? The answer is n(n-1)/2, which is a tight bound: take n vertices, order them from 1 to n; for two vertices j, add an edge i->j to the graph iff i<j. This sums to a total of n(n-1)/2, since this way, for every pair of vertices, there is exactly one directed edge between them, meaning we have as many edges in the graph as we would have in a complete undirected graph over How many paths can there be from one vertex to another in the graph described above? The answer is 2n-2. Proof by induction: Take the graph over 2 vertices as described above; there is 1 = 20 = 22-2 path from vertex 1 to vertex 2: (1->2). Induction step: assuming there are 2n-2 paths from the vertex with number 1 of an n vertex graph as described above to the vertex with number n, increment the number of each vertex and add a new vertex 1 along with the required n edges. It has its own edge to the vertex now labeled n+1. Additionally, it has 2i-2 paths to that vertex for every i in [2;n] (it has all the paths the other vertices have to the vertex n+1 collectively, each "prefixed" with the edge 1->i). This gives us 1 + Σnk=2 (2k-2) = 1 + Σn-2k=0 (2k-2) = 1 + (2n-1 - 1) = 2n-1 = 2(n+1)-2. So we see that there are DAGs that have 2n-2 distinct paths between some pairs of their vertices; this is a bit of a bleak outlook, since depending on weights and your choice of d, you may have to consider them all. This in itself doesn't mean we can't choose some form of optimum (which is what you're looking for) efficiently though. EDIT: Ok so here goes what I would do: - Delete all edges with weight 0 (and smaller, but you ruled that out), since they can never fulfill your second criterion. - Do a topological sort of the graph; in the following, let's only consider the part of the topological sorting of the graph from s to e, let's call that the integer interval [s;e]. Delete everything from the graph that isn't strictly in that interval, meaning all vertices outside of it along with the incident edges. During the topSort, you'll also be able to see whether there is a path from s to e, so you'll know whether there are any paths s-...->e. Complexity of this part is O(n+m). - Now the actual algorithm: - traverse the vertices of [s;e] in the order imposed by the topological - for every vertex v, store a two-dimensional array of information; let's call it prev since it's gonna store information about the predecessors of a node on the paths leading towards it - in prev[i][j], store how long the total path of length (counted in vertices) i is as a sum of the edge weights, if j is the predecessor of the current vertex on that path. For example, pres+1[s] would have the weight of the edge s->s+1 in it, while all other entries in pres+1 would be 0/undefined. - when calculating the array for a new vertex v, all we have to do is check its incoming edges and iterate over the arrays for the start vertices of those edges. For example, let's say vertex v has an incoming edge from vertex w, having weight c. Consider what the entry prev[i][w] should be. We have an edge w->v, so we need to set prev[i][w] in v to min(prew[i-1][k] for all k, but ignore entries with 0) + c (notice the subscript of the array!); we effectively take the cost of a path of length i - 1 that leads to w, and add the cost of the edge w->v. Why the minimum? The vertex w can have many predecessors for paths of length i - 1; however, we want to stay below a cost limit, which greedy minimization at each vertex will do for us. We will need to do this for all i in [1;s-v]. - While calculating the array for a vertex, do not set entries that would give you a path with cost above d; since all edges have positive weights, we can only get more costly paths with each edge, so just ignore those. - Once you reached e and finished calculating pree, you're done with this part of the algorithm. - Iterate over pree, starting with pree[e-s]; since we have no cycles, all paths are simple paths and therefore the longest path from s to e can have e-s edges. Find the largest i such that pree[i] has a non-zero (meaning it is defined) entry; if non exists, there is no path fitting your criteria. You can reconstruct any existing path using the arrays of the other vertices. Now that gives you a space complexity of O(n^3) and a time complexity of O(n²m) - the arrays have O(n²) entries, we have to iterate over O(m) arrays, one array for each edge - but I think it's very obvious where the wasteful use of data structures here can be optimized using hashing structures and other things than arrays. Or you could just use a one-dimensional array and only store the current minimum instead of recomputing it every time (you'll have to encapsulate the sum of edge weights of the path together with the predecessor vertex though since you need to know the predecessor to reconstruct the path), which would change the size of the arrays from n² to n since you now only need one entry per number-of-nodes-on-path-to-vertex, bringing down the space complexity of the algorithm to O(n²) and the time complexity to O(nm). You can also try and do some form of topological sort that gets rid of the vertices from which you can't reach e, because those can be safely ignored as well.
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776440175.84/warc/CC-MAIN-20140707234040-00066-ip-10-180-212-248.ec2.internal.warc.gz
CC-MAIN-2014-23
6,036
64
https://ghc.anitab.org/jamika-d-burge/
code
Dr. Jamika D. Burge is part of the Artificial Intelligence (AI) Design team that supports the conversational agent, Eno, at Capital One. In her role, she serves as Dean of Eno University, a professional development program supporting Capital One’s conversational AI at-scale. Prior to joining Capital One, she served as a tech consultant to DARPA, the Defense Advanced Research Projects Agency, in the Information Innovation Office. She provided technical and management consult for innovative DARPA programs which were funded at over $70 million. Jamika is also Founder and Principal of Design & Technology Concepts, LLC, a tech consultancy that focuses on computer science education & research and inclusive design. She is an authority in research and programming that investigates the intersectionality of black women and girls in computing, which led her to co-found blackcomputeHER.org (pronounced “black computer”), an organization dedicated to supporting computational and design thinking and workforce development for black women and girls. She has consulted for Google, the National Center for Women in Technology (NCWIT), and the American Association of Colleges & Universities (AAC&U). Jamika holds a doctorate in computer science and applications from Virginia Tech. She and her work have been featured in the New York Times and ComputerWorld, and in 2016, she was recognized by HackBright Academy as a Top Tech Leader to Watch. Through additional scholarship, Jamika serves as co-PI for the NSF-funded Building Student Retention through Individuated Guided coHort Training in CS (BRIGHT-CS) project, which is developing a CS ecosystem for girls of color in the New York City and Northern Virginia metro areas. She is also co-PI for Girls Rock TECH!, which, in partnership with Black Girls ROCK!, is developing a leadership curriculum that combines computational thinking with a powerful cultural and gender empowerment model for middle and high school girls.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817819.93/warc/CC-MAIN-20240421194551-20240421224551-00147.warc.gz
CC-MAIN-2024-18
1,977
4
https://bridgelakephotogroup.com/2020/08/16/using-lab-to-make-your-image-pop/
code
Here’s a simple way, using Photoshop, to make your images pop. And you’ll learn a bit about using the colour mode Lab (L=Lightness, a=Channel a, b=Channel b). It’s hard to show the difference, in a post, between the two photos above. To properly see the effect, click first on one photo above and then on the other. Both images should open as Tabs in your browser. Simple click back and forth between the two to see the difference. Let me describe the process. Open your image in Photoshop. You are going to duplicate the image (not the layer). You do this by selecting Image>Duplicate, checking the box which says Duplicate Merged Layers Only (it will be greyed out if there is only one layer), and accepting the suggested name for the image file (the original name with the word copy appended). This will result in a new image file being created. Make sure that the new file is selected and then Image>Mode>Lab Color. Now highlight your original image file/tab. Create a new blank layer (select + at the bottom of the layers panel). Again, go up to Image>Apply Image, but make sure that the Source is the copy image, select b as the Channel, the Layer is Background, the Blend Mode is Multiply at 100%. Select OK. Finally change the blend mode of the new layer to either Overlay or, if the effect is too much, to Soft Light. To see the effect, toggle the visibility of the new layer off and on. Note that you can also change the Opacity of the new layer. You can now delete the image you created by duplicating.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657169.98/warc/CC-MAIN-20230610095459-20230610125459-00741.warc.gz
CC-MAIN-2023-23
1,519
8
https://support.pega.com/question/check-property-null-list
code
I have a list where each object is another list, now I need to get to the second list get the object and check if the property is "", " ", null or is the property is missing, let me know how can I do this. I tried @compares and @equals but both are not working. 1st List - Customer 2nd List - Address Customer.Address.street == "" Customer.Address.street == " " Customer.Address.street = null During mapping from json the "Customer.Address.street may not even be there, so I need to check all the above 4 conditions. ***Edited by Moderator Marissa to update Content Type from Discussion to Question***
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100529.8/warc/CC-MAIN-20231204115419-20231204145419-00636.warc.gz
CC-MAIN-2023-50
601
9
https://community.tableau.com/message/182906?tstart=0
code
Did you try renaming the column, so that it is not last, and see if it appears? Is it missing from extract only, i.e. can you see it if there is no extract? After playing around with it a bit, it turns out to be a bit more complicated than I originally thought. First, it was only alphabetically last among dimensions. Second, I could only get it to work by renaming the column in custom SQL. The new name showed up, even though it too was alphabetically last. However, I was unable to alias the new column with the original name, because Tableau thought that the original column already existed.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392142.20/warc/CC-MAIN-20200527075559-20200527105559-00285.warc.gz
CC-MAIN-2020-24
596
2
https://github.com/openreplay/openreplay
code
Session replay for developers The most advanced session replay for building delightful web apps. OpenReplay is a session replay suite you can host yourself, that lets you see what users do on your web app, helping you troubleshoot issues faster. - Session replay. OpenReplay replays what users do, but not only. It also shows you what went under the hood, how your website or app behaves by capturing network activity, console logs, JS errors, store actions/state, page speed metrics, cpu/memory usage and much more. - Low footprint. With a ~26KB (.br) tracker that asynchronously sends minimal data for a very limited impact on performance. - Self-hosted. No more security compliance checks, 3rd-parties processing user data. Everything OpenReplay captures stays in your cloud for a complete control over your data. - Privacy controls. Fine-grained security features for sanitizing user data. - Easy deploy. With support of major public cloud providers (AWS, GCP, Azure, DigitalOcean). - Session replay: Lets you relive your users' experience, see where they struggle and how it affects their behavior. Each session replay is automatically analyzed based on heuristics, for easy triage. - DevTools: It's like debugging in your own browser. OpenReplay provides you with the full context (network activity, JS errors, store actions/state and 40+ metrics) so you can instantly reproduce bugs and understand performance issues. - Assist: Helps you support your users by seeing their live screen and instantly hopping on call (WebRTC) with them without requiring any 3rd-party screen sharing software. - Feature flags: Enable or disable a feature, make gradual releases and A/B test all without redeploying your app. - Omni-search: Search and filter by almost any user action/criteria, session attribute or technical event, so you can answer any question. No instrumentation required. - Analytics: For surfacing the most impactful issues causing conversion and revenue loss. - Fine-grained privacy controls: Choose what to capture, what to obscure or what to ignore so user data doesn't even reach your servers. - Plugins oriented: Get to the root cause even faster by tracking application state (Redux, VueX, MobX, NgRx, Pinia and Zustand) and logging GraphQL queries (Apollo, Relay) and Fetch/Axios requests. - Integrations: Sync your backend logs with your session replays and see what happened front-to-back. OpenReplay supports Sentry, Datadog, CloudWatch, Stackdriver, Elastic and more. OpenReplay can be deployed anywhere. Follow our step-by-step guides for deploying it on major public clouds: For those who want to simply use OpenReplay as a service, sign up for a free account on our cloud offering. Please refer to the official OpenReplay documentation. That should help you troubleshoot common issues. For additional help, you can reach out to us on one of these channels: - Slack (Connect with our engineers and community) - GitHub (Bug and issue reports) - Twitter (Product updates, Great content) - YouTube (How-to tutorials, past Community Calls) - Website chat (Talk to us) We're always on the lookout for contributions to OpenReplay, and we're glad you're considering it! Not sure where to start? Look for open issues, preferably those marked as good first issues. See our Contributing Guide for more details. Also, feel free to join our Slack to ask questions, discuss ideas or connect with our contributors. This monorepo uses several licenses. See LICENSE for more details.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00218.warc.gz
CC-MAIN-2023-50
3,490
29
https://www.tealhq.com/resume-example/python-django-developer
code
The ideal length for a Python Django Developer resume should be one to two pages. However, the length of your resume may depend on your experience level and the complexity of your projects. If you are an entry-level developer, one page should be sufficient. For experienced developers with a longer work history and extensive accomplishments, two pages may be necessary. It's important to prioritize the most relevant and recent experience, skills, and achievements and use concise language to describe them. Be sure to customize your resume for each job application, focusing on the skills and experiences most relevant to the specific Python Django Developer role you're applying for. Remember to avoid including outdated or irrelevant information and quantify your accomplishments whenever possible. The best way to format a Python Django Developer resume is to create a clear and concise document that highlights your technical skills, experience, and achievements. Here are some tips and recommendations for formatting a Python Django Developer resume: Ensure consistency in formatting throughout your resume, including font size, typeface, and spacing. Using a consistent format helps make your resume easy to read and navigate, making it more likely that hiring managers will review your entire document. Clear section headings: Clearly label each section of your resume (e.g., "Summary," "Experience," "Skills," "Education") with bold or underlined headings. This helps guide the reader's eye and makes it easier for them to find the information they're looking for. Include a section that highlights your technical skills, such as programming languages, frameworks, and tools. This is especially important for Python Django Developers, as it showcases your expertise in the specific technologies required for the job. Present your work experience in reverse chronological order, starting with your most recent position and working backward. Be sure to include specific examples of projects you have worked on and your contributions to them. This helps demonstrate your ability to apply your technical skills in real-world scenarios. Include your educational background, including any relevant degrees or certifications. This helps demonstrate your commitment to learning and staying up-to-date with the latest technologies. Overall, the key to formatting a successful Python Django Developer resume is to focus on your technical skills and experience, while presenting the information in a clear and concise manner. As a Python Django Developer, it's essential to highlight specific keywords and action verbs in your resume to showcase your skills, experience, and expertise effectively. Here are some important keywords and action verbs you should consider incorporating into your resume: 1. Python: Clearly mention your proficiency in Python programming language, as it is the core skill required for Django development. 2. Django: Emphasize your experience with Django, the high-level web framework that enables rapid development of secure and maintainable websites. 3. Web Development: Showcase your expertise in web development, including both front-end and back-end technologies. 5. RESTful APIs: Highlight your experience in designing and implementing RESTful APIs, which are crucial for communication between the front Writing a resume with little to no experience as a Python Django Developer can be challenging, but there are ways to showcase your skills and potential to hiring managers and recruiters. Here are some tips to help you craft an effective resume: Highlight your technical skills: Even if you don't have direct experience with Python Django, make sure to highlight your technical skills that are relevant to the field. This can include programming languages such as Python, Java, or C++, web development frameworks like Flask or Django, and database management systems like MySQL or PostgreSQL. Showcase relevant projects: If you've worked on any projects, either in school or as part of your previous roles, that are related to Python Django development, make sure to include them on your resume. This can include web application development, database design, or software engineering. Explain your role in these projects and the impact your contributions had on the final outcome. Highlight education and certifications: If you have a degree in a relevant field, such as computer science or software engineering, be sure to mention it. Additionally, include any Python Django certifications or courses you've completed, such as the Django for Beginners course from platforms like Udemy or Coursera. Demonstrate your passion for Python Django: Include any personal projects or contributions to open-source projects related to Python Django development. This can demonstrate your passion for the field and your willingness to learn and contribute to the community. Overall, focus on showcasing your technical skills, relevant projects, and passion for Python Django development to make your resume stand out to potential employers.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817780.88/warc/CC-MAIN-20240421132819-20240421162819-00116.warc.gz
CC-MAIN-2024-18
5,063
24
https://cwiki.apache.org/confluence/display/CLICK/Feature+Concepts+and+Roadmap
code
This page provides a place to reference new feature concepts which may be included in Clicks roadmap. Using a WIKI for concept exploration and discussion is particularly useful for features which: - are complex; - have significant impacts on the design of the framework; or - may affect backward compatibility. Please note features are often discussed in the Click development news group firstname.lastname@example.org and in JIRA https://issues.apache.org/jira/browse/CLK
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00104.warc.gz
CC-MAIN-2023-14
472
6
https://technologypartners.net/jobs/software-engineer-ii-java-jenkins-9178/
code
Technology Partners is currently seeking a talented Software Engineer II (147287). Do you have experience with Java, Jenkins and AWS? Let us help you make your next big career move a reality! What You Will Be Doing: You will be involved in the development of API Marketplace. You will work with technical leadership to design, develop and modify many different technical elements. Required Skills & Experience: - Must be committed to incorporating security into all decisions and daily job responsibilities - 3+ years of experience with Python and/or Groovy and/or Java-related coding languages - 3+ years experience with Jenkins - 2+ years experience in AWS environment and Terraform for infrastructure as code - Experience with Maven, Git/BitBucket, Jenkins, Webpack, NPM - Strong communication skills, including the ability to effectively communicate with people of varying technical knowledge - Strong troubleshooting skills. Able to resolve issues and support configuration issues for developers independently - Must be able to work in a fast-paced production environment and have the ability to handle multiple tasks - Experience with Agile development methodologies and tools such as Scrum, JIRA, and Confluence - Must have experience in full lifecycle development and end-to-end testing - Must have the ability to effectively collaborate and work with others in a remote work environment - Must demonstrate the ability to be flexible with changing priorities and requirements Desired Skills & Experience: - Bachelor’s Degree in Computer Science, Computer Information Systems, Management Information Systems, or related field preferred - Experience in building customer components - Good experience with editable templates, sling models, and content fragments - Knowledge of JIRA/Confluence and ServiceNow - Understanding of how to develop reusable code We are interested in every qualified candidate who is eligible to work in the United States. However, we are not able to provide sponsorship at this time or accept candidates who would require a corp-to-corp agreement. If this position sounds like you, WE SHOULD TALK! We realize our people are our most valuable asset, that is why we offer the following benefits: - Health, Dental, and Vision insurance - 401(k) retirement plan - Long and Short-Term disability - Life insurance - Direct deposit - Referral program Your better future is ready, and we want to put the right tools in your hands to get you there. Let’s go! Keywords: jenkins, aws, Java, terraform, python, groovy, aws Looking for more opportunities with Technology Partners? Check out technologypartners.net/jobs! Technology Partners is an Equal Opportunity Employer. Technology Partners does not discriminate on the basis of race, color, religion, sex, national origin, age, disability or any other characteristic protected by applicable state or federal civil rights laws.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00489.warc.gz
CC-MAIN-2022-49
2,903
35
https://qa.fmod.com/t/start-offset-automation-not-working-with-relative-transitions/17762
code
I’ve simplified my event to showcase the bug I’m experiencing. In this scenario I’m transitioning from region A to region B, and back again, with a relative offset. Normally I’d have both set to be synchronous, but I have async enabled so that I can take advantage of a fade out envelope when transitioning. To maintain the relative position, I’ve automated the start offset to match the relative position of the transition region. If I drop the play head in the middle of the file and hit play, the file will start from the middle of the file as expected. However, if I play the event from the start, then trigger the transition, when the playhead jumps to the relative position in the destination region, instead of adjusting the start offset, it plays from the start of the file. This feels like a bug, but I could be missing something?
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00136.warc.gz
CC-MAIN-2021-43
849
4
https://www.macupdate.com/app/mac/22633/quotient
code
Displays the quotient of any division equation (was X-Quotient). A quotient may extend off the screen of your calculator, such as when you divide 1 by 7. It turns out, however, that at least some part of what follows the decimal point repeats indefinitely. This program writes the quotient of any integer division, to a user-specified level of decimal precision, into a text box, from where it can be copied to the Clipboard, or exported (along with the rest of the calculation parameters) to a text file. A repetend locator rounds out the program's functionality, employing an algorithm with the ability to determine a repeating end-segment portion in any finite-length string, which the result text box then displays. What's new in Quotient Fixed a bug that prevented the online help from loading. Made the main window resizable once more. Increased the height of the main window. Added an adjustable slider between the quotient and repetend text views. Rearranged the main window slightly. Fixed an oversight in the AppleScript terminology. Indexed the online help, as it should have been initially. Updated the online help to full-fledged XHTML. Window is now a fixed size; this makes sense from the standpoint of the width of the text fields, and also fixes some sizing issues with the split view. Remade in Objective-C 2.0; the program is no longer compatible with systems prior to Mac OS X version 10.5 ("Leopard"). Updated contact information in online help. Other minor changes to both UI and code. Join over 500,000 subscribers. Subscribe for our newsletter with best Mac offers from MacUpdate.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488620.24/warc/CC-MAIN-20191206122529-20191206150529-00253.warc.gz
CC-MAIN-2019-51
1,604
17
http://www.openmamba.org/distribution/distromatic.html?tag=devel&pkg=recode.i586
code
|Description:||The recode program has the purpose of converting files between various character sets and usages. When exact transliterations are not possible, as it is often the case, the program may get rid of the offending characters or fall back on approximations.| Let us coin the term charset to represent, without distinction, a character set "per se" or a particular usage of a character set. This program recognizes or produces around 150 such charsets. Since it can convert each charset to almost any other one, many thousands of different conversions are possible. This tool pays special attention to superimposition of diacritics for French representation. This orientation is mostly historical, it does not impair the usefulness, generality or extensibility of the program. |Requires:||/bin/sh /bin/sh /sbin/install-info /sbin/install-info libc.so.6 libc.so.6(GLIBC_2.0) libc.so.6(GLIBC_2.1) libc.so.6(GLIBC_2.3) librecode[=3.6-2mamba] librecode.so.0 | |RPM requirements:||bash texinfo glibc librecode | |Required by:||festival-freebsoft-utils(i586) | |Build required by:| |Filenames:||/usr/bin/recode /usr/share/info/recode.info-1.gz /usr/share/info/recode.info-2.gz /usr/share/info/recode.info-3.gz /usr/share/info/recode.info-4.gz /usr/share/info/recode.info-5.gz /usr/share/info/recode.info-6.gz /usr/share/info/recode.info-7.gz /usr/share/info/recode.info.gz /usr/share/locale/da/LC_MESSAGES/recode.mo /usr/share/locale/de/LC_MESSAGES/recode.mo /usr/share/locale/el/LC_MESSAGES/recode.mo /usr/share/locale/es/LC_MESSAGES/recode.mo /usr/share/locale/fr/LC_MESSAGES/recode.mo /usr/share/locale/gl/LC_MESSAGES/recode.mo /usr/share/locale/it/LC_MESSAGES/recode.mo /usr/share/locale/nl/LC_MESSAGES/recode.mo /usr/share/locale/pl/LC_MESSAGES/recode.mo /usr/share/locale/pt/LC_MESSAGES/recode.mo /usr/share/locale/sl/LC_MESSAGES/recode.mo /usr/share/locale/sv/LC_MESSAGES/recode.mo /usr/share/man/man1/recode.1.gz |
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698238192/warc/CC-MAIN-20130516095718-00073-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,925
8
http://www.occupypoetry.net/my_pretty_rose_tree
code
Do not be too moral. You may cheat yourself out of much of life. So aim above morality. Be not simply good; be good for something. My Pretty Rose Tree by William Blake A flower was offered to me; Such a flower as May never bore. But I said I've a Pretty Rose-tree. And I passed the sweet flower o'er. Then I went to my Pretty Rose-tree: To tend her by day and by night. But my Rose turnd away with jealousy: And her thorns were my only delight.
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296603.6/warc/CC-MAIN-20150323172136-00212-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
444
11
http://journeywithcates.blogspot.com/2007/07/another-great-weekend-we-had-lots-of.html
code
Tuesday, July 3, 2007 Happy almost 4th Another great weekend! We had lots of fun at Spamalot in Dallas. It is a very, very funny show. I wasn't sure what to expect since I am not a big Monty Python fan. The cast was great with many off the wall jokes. We enjoyed our last family visit to San Antonio. Andrew had a good birthday with plenty of Aggie stuff to keep him happy. When we returned back to Waco he got a Wii game system. I like the bowling game :) We are looking forward to my parents annual 4th of July cookout tomorrow. This will be the first year since we have been married that we have actually been in town to go. Have a Happy 4th!
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591837.34/warc/CC-MAIN-20180720213434-20180720233434-00209.warc.gz
CC-MAIN-2018-30
645
6
http://gupta9665.com/whats-new-in-solidworks-2017-clean-feature-tree/
code
SOLIDWORKS 2017 Pre-release PR1 is available to download. There has been many enhancements to the new version which have help many users to improve their current workflow and overcome some of the earlier limitations. I’ll be sharing/explaining some of the cool enhancements which I feel would be helpful to majority of SOLIDWORKS users. Clean Feature Tree This is a small but yet powerful enhancement. Like other users, I used to struggle with feature tree where configuration and display state would show up making the feature tree look clustered. But now with the new enhancement one can clean off the feature tree on just click of button. If your model has only one configuration then you can hide both configuration and display state name by selecting the option ” Do not show Configuration/Display State Names if only one exists” in the Feature Tree Display setting. Since this is a document based property, one can set it in the templates and all new files would show up clean. Stay tuned for more exciting updates and enhancements in SOLIDWORKS 2017
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815934.81/warc/CC-MAIN-20180224191934-20180224211934-00168.warc.gz
CC-MAIN-2018-09
1,062
6
http://www.gunslot.com/pictures/cowbell
code
A healthy girl like that eats when the bell rings I would like to lick that 5 ways to Sunday. the cowbell or the girl,or the statue in the pond on her hands and knees? and the only remedy is MORE COWBELL! I think it's a dinner bell and she is definately GOOD EATIN
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00094-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
264
5
https://forum.uipath.com/t/what-is-the-filter-expression-for-get-exchange-mails/488876
code
For example i want to get one date to another date in get exchange mails Like for example i give dates for input dialogue box and another date in dialogue box so my final output should be stored in one expression, like for ex i give input as 24-10-2022 to 28-10-2022 so for this how to write expression in exchange get mails to get from inbox? Also i want to get dates like from sent items so for this i have to give input date only so for this how to write filter expression Here are some of the examples You can also try this get the mails and loop through with for each (property as Net.Mail.Message) and use if condition inside for each and give your condition and process it You can filter using “received: >=10/24/2022 AND received: <=10/28/2022”. you can get the values from the dailog box and then do a concat like “receivd: >=” + value1 + “AND received: <=” + value2 Or Alternately get all mails and loop through and use if condition thanks for the reply, sorry for the delayed response but im getting dates as from 25 not 24 Try changing the date in filter and also check if there are any mails on that day. Please mark solution and close the thread as others can also get help Forum FAQ - How to mark a post as a solution This document is part of our beginners guide. This article will teach you how to properly mark a post as a solution. We were closely following our last UiPath Forum feedback round! topic and carefully extracted all the bits of feedback that was provided. As such, we would like to tackle the topic of solutions on our Forum and how to properly use them. When is the topic resolved? The topic can be considered resolved when the topic author has fo… “received: >=10/24/2022 AND received: <=10/28/2022” im getting output in mentioned excel format but could you let me know like how to convert? dd/mm/yyyy hh:mm:ss You can use before pasting write datevar.Tostring(“dd/mm/yyyy hh:mm:ss”) and also make sure in excel the cell format is proper If not add a '(single quote infront of the date so that excel thinks it as string and will not reformat Can we close this please. As for one type of question one thread needs to be used and then closed. If you have any more questions feel free to open new threads. That way answers can be segregated and made available for others as well On top I have posted how to mark a solution and close. Please follow the steps Still he did not get the Answer not correctly The data is not filtered properly right ? Can you look over post of Yes please. I did and its more on getting mails between dates. I dont mind giving the same on elaboration but crux is solved I believe okay thanks i will create new topic for this For dates you need not as well satish. This is more to give better solution for the problem that anyone has. Happy Automation. Happy to solve the issues any time. @sathish_Kumar6 thanks for the reply again on clarifying it i know im not good at dates so im sorry Totally understandable. We are all on same page. We learn daily. For Date check out this expression DateTime.ParseExact(StringInput.ToString,"ddd, dd MMM yyy HH:mm:ss K",System.Globalization.CultureInfo.InvariantCulture).ToString("dd/MM/yyyy hh:mm:ss") Check out the tutorial I would like to present this post who are looking for the expressions related to the Date Format, String manipulation and LINQ Basic of Date formatting In this below topic, you will learn about Date Formats Convert datetime to week of year My recent post on datatable [All About Datatable- UiPath] ( All about Datatable - UiPath) was received well in forum with great feedbacks and many suggested to provide some tutorial on topics which are discussed often in our forum In regards to that I would like to share the commonly used expression for DATETIME conversion Let’s get started one by one 1. Get Current Date in string format (Output - string type - “dd/MM/yyyy hh:mm:ss”) **To get only current da… thanks for the reply…i tried but i got some assign errors You can create the new topic for that error.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00078.warc.gz
CC-MAIN-2023-06
4,060
51
https://communities.vmware.com/t5/VMware-vCenter-Discussions/Network-Speed-Issues/m-p/2713398
code
Guys and Gals, I am not sure if i am posting this in the right department here or not, heck i might need to post on a cisco forum but here it goes. I have 4 ESXI hosts that each have 4x 10gb connections on them. These connections go back to a Cisco Nexus 5508 switch where the ports are setup as trunks. The 4 connections on the server are setup in a distributed switch where i have my VLAN's configured and all of that seems to be working fine. My storage controller has a similar setup, 2x 10gb ports that are trunked back to the Nexus (same switch as ESX) and it is using hybrid SSD and 10k drives. Looking just at the numbers my network between the hosts and the storage should be smoking fast, but it isn't. I have been trying, without success, for 3 days now to create a new pool in Horizon View and it keeps timing out with the operation took longer blah blah blah error and i see in vcenter that the process in creating the replica is taking about an hour and a half. The original disk it is replicating is about 50gb and given the amount of bandwidth i have allocated to this, the process should take >10 mins. I am banging my head against a wall trying to track this down and don't know where to even start looking. Is it a vcenter setting a horizon setting or a misconfigured switch along the way? I might add we are using a 3560 L3 switch as a gateway and vlan router, so my first thought is maybe for some reason the traffic is leaving the 10g switch and visiting the 1g switch for a quickie then being sent back home where it came from to be directed out another port. I have checked and my nexus is setup for jumbo frames, my 1g switch is not, but the jumbo traffic shouldn't ever cross over there. I will take any help you guys could give and will provide any logs you need as well, just let me know. Can you provide a network diagram of your setup? I think visually. From what you've explained it would seem that the 3560 is routing between the storage VLAN and others? Or how is the storage connected? Are you using FCOE, ISCSI, or NFS? When this is happening try monitoring the ports on the nexus and the 3560. Then you've see if the 3560 is routing storage packets. If that is the case, I don't think the frame buffers on the 3560 are sufficient for storage traffic. Do you only have 1x 10G connection per host?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817249.26/warc/CC-MAIN-20240418222029-20240419012029-00382.warc.gz
CC-MAIN-2024-18
2,331
3
https://electrodynamics.org/publications/dirac-wire-fermionic-waveguides-longitudinal-spin
code
The interplay of photon spin and orbital angular momentum (OAM) in the optical fiber (one-dimensional waveguide) has recently risen to the forefront of quantum nanophotonics. Here, we introduce the fermionic dual of the optical fiber, the Dirac wire, which exhibits unique electronic spin and OAM properties arising from confined solutions of the Dirac equation. The Dirac wires analyzed here represent cylindrical generalizations of the Jackiw-Rebbi domain wall and the minimal topological insulator, which are of significant interest in spintronics. We show the unique longitudinal spin arising from electrons confined to propagation in a wire, an effect which is fundamentally prohibited in planar geometries. Our work sheds light on the universal spatial dynamics of electron spin in confined geometries and the duality between electronic and photonic spin.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00668.warc.gz
CC-MAIN-2022-33
861
1