| url (string, 13–4.35k chars) | tag (string, 1 class: "code") | text (string, 109–628k chars) | file_path (string, 109–155 chars) | dump (string, 96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
https://forum.purseblog.com/threads/so-are-the-cotton-signature-carlys.123520/
|
code
|
in stores now? The closest Coach boutique to me is Rochester, NY. Does anyone know if they are there? Also, is that the official name of them: cotton signature Carly, or is it something different? I'm thinking of this bag: From this thread: http://forum.purseblog.com/coach/my-cotton-signature-large-carly-in-chocolate-pics-115823.html In the Chocolate large.... Do the cottons come in any other colors? ALSO What's this bag? Is it part of the cotton line?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890893.58/warc/CC-MAIN-20180121214857-20180121234857-00673.warc.gz
|
CC-MAIN-2018-05
| 459
| 1
|
https://learn.microsoft.com/en-us/cpp/cpp/compiler-com-support?view=msvc-170
|
code
|
Compiler COM Support
The Microsoft C++ compiler can directly read component object model (COM) type libraries and translate the contents into C++ source code that can be included in the compilation. Language extensions are available to facilitate COM programming on the client side for desktop apps.
By using the #import preprocessor directive, the compiler can read a type library and convert it into a C++ header file that describes the COM interfaces as classes. A set of
#import attributes is available for user control of the content for the resulting type library header files.
You can use the __declspec extended attribute uuid to assign a globally unique identifier (GUID) to a COM object. The keyword __uuidof can be used to extract the GUID associated with a COM object. Another __declspec attribute, property, can be used to specify the get and set methods for a data member of a COM object.
A set of COM support global functions and classes is provided to support the
BSTR types, implement smart pointers, and encapsulate the error object thrown by
END Microsoft Specific
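A minimal sketch of the pattern described above; "mylib.tlb", MyCoClass, IMyInterfacePtr and GetName are hypothetical placeholder names for what #import would generate from a real type library:

#import "mylib.tlb" no_namespace   // generates mylib.tlh/.tli with wrapper classes
#include <comdef.h>                // _com_error, _bstr_t

int main() {
    CoInitialize(nullptr);
    try {
        // _com_ptr_t smart pointer generated by #import; constructing it from
        // the coclass GUID calls CoCreateInstance under the hood.
        IMyInterfacePtr p(__uuidof(MyCoClass));
        _bstr_t name = p->GetName();   // BSTR handled via the _bstr_t wrapper
    } catch (const _com_error& e) {
        // Wrapper methods surface failed HRESULTs as thrown _com_error objects.
        (void)e.ErrorMessage();
    }
    CoUninitialize();
    return 0;
}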
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474482.98/warc/CC-MAIN-20240224012912-20240224042912-00179.warc.gz
|
CC-MAIN-2024-10
| 1,075
| 10
|
https://www.winehq.org/pipermail/wine-users/2011-November/098982.html
|
code
|
I was successful and it seems IE8 installed okay. However, I still cannot find any clue how to actually launch IE: no icons, and no idea how to add one to the menu, etc. Also, when browsing in the file browser I cannot see any of the files, and I don't know where they are.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886476.31/warc/CC-MAIN-20180116164812-20180116184812-00080.warc.gz
|
CC-MAIN-2018-05
| 253
| 1
|
https://johncmorrissey.wordpress.com/2017/04/21/remote-nano-server-admin-quick-few-commands-needed-to-get-you-access-to-box/
|
code
|
Recently I was given a project with a proof of concept involving the use of Windows 2016 Nano Servers (I don't even know if that's their official title :)). Anyhow, as these are not quite your conventional Windows servers, it took a wee bit of lurking to find what I needed to get access.
Bit of background:
So Nano Server comes with a rebuilt subset of Windows PowerShell that they've called Core PowerShell. Feature-set wise it seems to have everything I need: full remoting capability, language compatibility, etc.
As it does come with Windows PowerShell remoting, that is indeed our gateway to access the server.
i) You need administrator-level privileges on the Nano Server
ii) Add its IP to the managing machine's trusted hosts (assuming 192.168.1.1 is the Nano Server's IP):
PS C:\> Set-Item WSMan:\localhost\Client\TrustedHosts "192.168.1.1"
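If the managing machine already has TrustedHosts entries, you can append rather than overwrite; a sketch using the WSMan provider's -Concatenate switch:
PS C:\> Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.168.1.1" -Concatenate -Force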
Next you can start an interactive remoting session:
PS C:\NanoServer> $ip = "192.168.1.1"
PS C:\NanoServer> $user = "Administrator"
PS C:\NanoServer> Enter-PSSession -ComputerName $ip -Credential $user
After that you are good to go and can run commands as if you were entering them directly on the Nano Server console, e.g.
[192.168.1.1]: PS c:\users\test\documents> ipconfig
To get the full list of commands available
[192.168.1.1]: PS c:\users\test\documents> Get-Command -CommandType Cmdlet
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575168.82/warc/CC-MAIN-20190922053242-20190922075242-00312.warc.gz
|
CC-MAIN-2019-39
| 1,339
| 15
|
https://forum.manjaro.org/t/nordvpn-creating-warning-duplicate-ipv6-messages/108773
|
code
|
After upgrading to kernel 5.15.32 and rebooting the OS, I noticed that I was getting a lot of the following messages:
NetworkManager: [1650363659.0309] ipv6ll[f4715da0b2db63b4,ifindex=2]: changed: no IPv6 link local address to retry after Duplicate Address Detection failures (back off)
Apr 19 11:21:09 gary-KDE NetworkManager: [1650363669.0321] platform-linux: do-add-ip6-address[2: fe80::122d:3842:54f2:982c]: failure 13 (Permission denied)
These stop and start when I disconnect from / connect to a NordVPN server.
I have contacted NordVPN support, who state that they do not support IPv6 and that these messages are due to the Nordvpn.bin app.
They do not appear to be impacting my system as yet, just filling up my logs. Is this something that can be passed to the app developer to review?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00143.warc.gz
|
CC-MAIN-2022-40
| 778
| 8
|
https://www.romhacking.net/forum/index.php?topic=26507.380
|
code
|
Here's a quick look at how the GUI in v16 (and onwards) will look:
(EDIT: link broken)
(Ignore the green square, ScreenToGif does that sometimes).
All the new stuff has been discussed in the thread already, no surprises there. Though the tabs do make it look like it grew a lot.
The GUI has taken a lot more time than expected, and it's still not quite done. Several of the new features haven't actually been implemented yet. I'm also waiting for NectarHime's script text files (I already inserted DuoDynamo's retranslation as it is, but since NectarHime agreed to edit script files himself, I agreed to add his "second draft" of the script as a separate option too), and also Metalwario64's final set of mugshots.
So yeah, still lots of stuff to do, but I'm slowly getting there.
Hi. I encountered a problem with cheat code 2 after applying the Complete patch. When I use cheat code 2 to start the game as Zero, nothing happens; it's still Falcon X. My emulator is ePSXe.
Using cheat code 2 in v15 makes Zero "selectable" after the intro stage; it doesn't make him playable in the intro stage. It will in the next version, though; that's already taken care of.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990449.41/warc/CC-MAIN-20210514091252-20210514121252-00334.warc.gz
|
CC-MAIN-2021-21
| 1,139
| 8
|
http://farmi.esmtp.biz/wechsler/How-To-Write-Aggregate-Function-In-Sql.html
|
code
|
Summary: in this tutorial, you will learn about the SQL aggregate functions, including AVG(), COUNT(), MIN(), MAX(), and SUM(). An SQL aggregate function calculates over a set of values and returns a single value. For example, the average function (AVG) takes a list of values and returns the average. Because an aggregate function operates on a set of values, it is often used with the GROUP BY clause.
How to Write a Multiplication Aggregate Function in SQL. Posted on September 21, 2018 (updated September 16, 2019) by lukaseder. Everyone knows the SQL SUM() aggregate function (and many people also know its window function variant).
An aggregate function allows you to perform a calculation on a set of values to return a single scalar value. We often use aggregate functions with the GROUP BY and HAVING clauses of the SELECT statement. The following are the most commonly used SQL aggregate functions. You cannot write custom aggregates outside of the CLR; the only types of functions you can write in pure T-SQL are scalar and table-valued functions. Compare the pages for CREATE AGGREGATE, which only lists CLR-style options, with CREATE FUNCTION, which shows T-SQL and CLR options. SQL aggregate functions are built-in functions used for performing operations on data: they operate on multiple rows of a particular column and produce a single value.
If you want to exclude duplicate values from the aggregate function results, use the DISTINCT keyword. The ALL keyword includes even duplicates; if nothing is specified, ALL is assumed as the default. Aggregate functions can be used in conjunction with other SQL clauses such as GROUP BY. Brain teaser: you think aggregate functions are easy?
Note: deploying a SQL Server project in Microsoft Visual Studio registers an assembly in the database that was specified for the project. Deploying the project also creates a user-defined aggregate in the database for all class definitions annotated with the SqlUserDefinedAggregate attribute. For more information, see Deploying CLR Database Objects.
Summary: in this tutorial, you will learn how to use the PostgreSQL aggregate functions such as AVG(), COUNT(), MIN(), MAX(), and SUM(). Introduction to PostgreSQL aggregate functions: aggregate functions perform a calculation on a set of rows and return a single row. PostgreSQL provides all standard SQL aggregate functions.
Summary: in this tutorial, you will learn how to use the SQL Server MAX() function to find the maximum value in a group. The SQL Server MAX() function is an aggregate function that returns the maximum value in a set. The following shows the syntax: MAX(expression). The MAX() function accepts an expression that can be a column.
Summary: in this tutorial, you will learn about MySQL aggregate functions, including AVG(), COUNT(), SUM(), MAX() and MIN(). An aggregate function performs a calculation on multiple values and returns a single value. For example, you can use the AVG() aggregate function, which takes multiple numbers and returns their average.
Using these functions, you can implement and deploy aggregate functions that do not belong to the aggregate functions supported by the system. These functions are a special case of user-defined functions, which are described in detail in the chapter "Stored Procedures and User-Defined Functions".
SQL aggregate functions with real-life examples: in this section I will give you SQL aggregate functions with explanations and try to give different kinds of real industry examples. Aggregate functions take multiple inputs but give an aggregated result for multiple rows of the table.
Aggregate functions are built-in functions in SQL. They are used for specific operations such as computing the average of numbers, the total count of records, the total sum of numbers, etc. They are also called group functions because they apply to groups of data.
Using the SQL aggregate functions in Access, you can determine various statistics on sets of values. You can use these functions in a query and in aggregate expressions in the SQL property of a QueryDef object, or when creating a Recordset object based on an SQL query: Avg, Count, First/Last, Min/Max, StDev.
Unlike other SQL aggregate functions, the SUM() function accepts only expressions that evaluate to numerical values. You can specify either the ALL or the DISTINCT modifier in the SUM() function. The DISTINCT modifier instructs the SUM() function to calculate the total of distinct values, which means duplicates are eliminated; the ALL modifier allows the SUM() function to return the sum of all values.
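To make the pattern concrete, here is a small sketch; the orders table and its columns are illustrative:

-- One row per order line; aggregate per customer.
SELECT customer_id,
       COUNT(*)                   AS order_lines,
       SUM(amount)                AS total_amount,
       AVG(amount)                AS avg_amount,
       COUNT(DISTINCT product_id) AS distinct_products
FROM orders
GROUP BY customer_id
HAVING SUM(amount) > 1000;  -- HAVING filters on the aggregated value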
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509104.12/warc/CC-MAIN-20210117020341-20210117050341-00596.warc.gz
|
CC-MAIN-2021-04
| 5,082
| 13
|
https://arslantariq.com/laravel-database-schedule/
|
code
|
Want to manage your database schedule with a UI? Laravel Database Schedule is a package for scheduling tasks through a UI dashboard without having to redeploy your application.
But you'll still need to write the scheduled task commands yourself (a minimal command sketch follows the feature list below); once created, you can use the provided /schedule endpoint to configure a task's schedule:
- Manage scheduled tasks via a GUI dashboard (create, edit, delete, disable)
- Custom authentication logic for dashboard access using Laravel gates
- Configurable parameters via UI
- Configure Before and after webhook endpoints
- Send task output via email
- See command run history from the dashboard
- Configure job rules for no overlap, executing on one server, run even during maintenance mode, etc.
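The commands themselves are ordinary artisan commands; a minimal sketch (the class name and signature are illustrative):

<?php
// app/Console/Commands/SendReport.php - a hypothetical command the dashboard can schedule.
namespace App\Console\Commands;

use Illuminate\Console\Command;

class SendReport extends Command
{
    protected $signature = 'report:send';
    protected $description = 'Send the daily report';

    public function handle(): int
    {
        // Do the work; output written here can show up in the run history.
        $this->info('Report sent.');
        return self::SUCCESS;
    }
}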
composer require robersonfaria/laravel-database-schedule
php artisan migrate
✌ Environment variables
You can set the following environment variables to configure schedules:
- SCHEDULE_TIMEZONE : The default is the same as the one configured for the application, but if you need the schedules to run in a different timezone, it is possible to configure it with this variable
- SCHEDULE_CACHE_DRIVER : The default is
- SCHEDULE_CACHE_ENABLE : The default is disabled when APP_DEBUG=true and enabled when APP_DEBUG=false
You can learn more about this package, get full installation instructions, and view the source code on GitHub.
Read Also : Migrate One Database to Another in a Laravel Project
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648465.70/warc/CC-MAIN-20230602072202-20230602102202-00720.warc.gz
|
CC-MAIN-2023-23
| 1,400
| 19
|
https://sharepoint.stackexchange.com/questions/148696/sharepoint-online-backup-site
|
code
|
I got a new issue from my client. He wants to back up a SharePoint site for restore, in case something goes wrong, and the challenge is not to use a 3rd-party solution (MetaVis, AveDoc). Is there any other way to get a backup? I am thinking about writing my own application for backing up, maybe using CSOM, but I can't find a way to connect to the SharePoint storage (site description schema). Where could I start? Please share your experience if you have had challenges like this; I want to find a programmatic solution for this challenge.
From this link, we can see there are only 4 ways to restore data in SharePoint Online:
Site Collection Restore
- Backups are taken every 12 hours, and are kept for 14 days.
- You must contact technical support to perform a site collection restore.
- The data will be restored to the same url that it was taken from. You cannot restore to a different url, and you will lose any current data.
3rd Party Solutions
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657181335.85/warc/CC-MAIN-20200716021527-20200716051527-00323.warc.gz
|
CC-MAIN-2020-29
| 921
| 7
|
https://mail.python.org/pipermail/python-list/2015-September/696356.html
|
code
|
ImportError: No module named multiprocessing
jlmora_ at hotmail.com
Wed Sep 9 23:58:47 CEST 2015
I have a problem importing the multiprocessing module. I am using Python 2.7 and Windows 8. The program (.py) is executed from the MS-DOS console.
If I run the program, the output is "ImportError: No module named multiprocessing".
If I execute "import multiprocessing" on the Python command line I don't have problems, but that isn't useful for my program, because I need to import it from my own .py file.
What do you recommend? I need to benefit from process-based parallelism and use several processors.
Thank you so much
Julio Mora Olivares
Estudiante Ingeniería Civil Industrial
Diploma en Transporte y Logística
Pontificia Universidad Católica de Chile
-------------- next part --------------
An HTML attachment was scrubbed...
More information about the Python-list
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526506.44/warc/CC-MAIN-20190720091347-20190720113347-00009.warc.gz
|
CC-MAIN-2019-30
| 872
| 15
|
https://boards.na.leagueoflegends.com/en/c/gameplay-balance/6uf843dj-now-that-nullifying-orb-exists?comment=00080000
|
code
|
Could you please take a look at Hexdrinker?
I know that MR runes are getting removed and some mages will benefit from that, but I'm kind of tired of Hexdrinker. If your opponent decides to get it early, he's pretty much impossible for you to kill (as an AP champion, of course) unless: 1) you are Cassiopeia; 2) you are 10 kills ahead and you have guise, sorcs and void; 3) Hexdrinker is on CD.
Now that an AD champ can potentially have TWO magic damage shields, isn't it time to change Hexdrinker? It's already an overloaded item with ridiculous cost efficiency on purchase, and it should not have a shield (early game, that 180 damage mitigation is a lot) on top of MR.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00177.warc.gz
|
CC-MAIN-2020-05
| 671
| 3
|
https://blog.vdice.io/1st-lock-period-review/
|
code
|
First Lock Period Review
Last week was big for vDice. It was the first week that the entire Smart Contract system was really tested. Below is a review of this first lock period.
The vDice platform and vSlice (VSL) token work via interacting Ethereum Smart Contracts.
There are the Game Smart Contracts. You can see the addresses at the vDice site. These feed profits through to a single Smart Contract, called the ProfitContainer.
The address for the ProfitContainer Smart Contract is here: 0x51FFC1b089392a5bb65BF24EAf04d07D0e6F88B5#internaltx
All future games will also feed profits through to the same ProfitContainer.
The ProfitContainer Smart Contract proved itself. Holders of VSL were able to withdraw their profits successfully.
There was a small hiccup. The UI notification for the ProfitContainer was slightly out of sync with the actual Smart Contract. So users were notified that the lock period was active, slightly before withdrawals were actually possible.
This has been rectified. Though there are 5 days in which to withdraw profits. So it was not a major issue.
A couple of important things to remember:
- Profits do not rollover to the next epoch. They do not accumulate. They are available for you to withdraw in a certain epoch. Then it’s over, until the next epoch lock period.
- You can only withdraw profits 1x per epoch.
- Check the epoch period here.
We received a lot of feedback and requests about the ProfitContainer and the system. We listened diligently to what our users were saying.
As a result we are building a ProfitContainer calculator. This will allow users to calculate their profits in advance, prior to withdrawal, based on how many VSL they hold.
Overall the system performed well and certainly as expected.
We look forward to releasing more user-friendly tools. This will ensure the overall system is as simple as possible to interact with for a non-technical user base.
Follow us on Social Media
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123632.58/warc/CC-MAIN-20170423031203-00355-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,950
| 17
|
https://github.com/Area128/PJON
|
code
|
PJON® (Padded Jittering Operative Network) is an Arduino compatible, multi-master, multi-media network protocol. It proposes a Standard, it is designed as a framework and implements a totally software emulated network protocol stack that can be easily cross-compiled on many MCUs and architectures like ATtiny, ATmega, ESP8266, ESP32, STM32, Teensy, Raspberry Pi, Linux, Windows x86 and Apple machines. It is a valid tool to quickly and comprehensibly build a network of devices. Visit wiki and documentation to know more about the PJON protocol.
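For orientation, a minimal transmitter sketch loosely based on the project's documented SoftwareBitBang examples; the header, class and method names vary between releases, so treat this as indicative rather than definitive:

#include <PJON.h>

PJON<SoftwareBitBang> bus(45);   // this device's id on the bus

void setup() {
  bus.strategy.set_pin(12);      // single shared wire on pin 12
  bus.begin();
  bus.send(44, "B", 1);          // queue 1 byte addressed to device id 44
}

void loop() {
  bus.update();                  // transmit pending packets
}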
PJON is used in thousands of devices and its community has spread worldwide because of the following 6 key factors:
- New technology: PJON is an experimental network protocol stack crafted in 7 years of research and experimentation. It was originally developed as an open-source alternative to i2c and 1-Wire but during development its scope and features have been extended to cover use cases where IP is generally applied. PJON has been engineered to have a variable footprint (3.5-8.2 kB program memory) and overhead (5-22 bytes per packet) depending on its configuration.
- Multi-media support: PJON packets can be transmitted on a wide range of media and protocols like TCP, UDP, Serial, RS485 and LoRa. The PJON network protocol stack specifies and implements also the PJDL wired data link able to communicate data through a single common conductive element shared by up to 255 devices, PJDLR wireless data link compatible with many ASK/FSK/OOK radio modules, and also PJDLS wireless data link able to communicate through light pulses using off the shelf LEDs and laser diodes easing integration and enabling a lot of applications.
- Increased security: Devices using Ethernet/WiFi are often vulnerable to ransomware, illegal cyber warfare and private data leaks. PJON has been engineered to enhance security by not implementing the standard network protocol stack, together with its vulnerabilities, where it is not absolutely necessary, offering a set of alternatives for many use cases.
- Increased reliability: Many protocols massively applied worldwide, like 1-Wire, i2c and CAN, expose dangerous vulnerabilities; their error detection algorithms are weak and they are not resilient to interference. The PJON network protocol stack is based on years of analysis and study, avoiding the repeated mistakes present in most alternatives and providing a set of simpler and more efficient solutions.
- High flexibility: PJON is totally software-defined and its implementation is designed to be easily extensible. It builds out-of-the-box on all supported devices and operates transparently on top of any supported protocol or medium.
- Low cost: Without any additional hardware needed to operate, minimal network wiring requirements and direct pin-to-pin or LED-to-LED communication, PJON is extremely energy efficient, cheap to be implemented and maintained. This implementation is kept updated and meticulously tested thanks to the strong commitment of its growing community of end users, testers and developers.
- Cross-compilation support with the interfaces system calls abstraction
- Multi-media support with the strategies data link layer abstraction
- Master-slave or multi-master dynamic addressing
- Hot-swap support, no need of system reset or shut down when replacing or adding devices
- Configurable synchronous and/or asynchronous acknowledgement
- Configurable 2 level addressing (device and bus id) for scalable applications
- Configurable 1 or 2 bytes packet length (max 255 or 65535 bytes)
- Configurable CRC8 or CRC32 table-less cyclic redundancy check
- Packet manager to handle, track and if necessary retransmit a packet sending in background
- Error handling
- PJON v3.0
- PJON Acknowledge v1.0
- PJON Dynamic addressing v2.0
- PJDL v2.0 - PJDLR v2.0 - PJDLS v2.0 - TSDL v2.0
- ModuleInterface - easy config and value sync between IoT modules by Fred Larsen
- PJON-cython - cython PJON wrapper by xlfe github user
- PJON-piper - command line wrapper by Zbigniew Zasieczny
- PJON-python - python interface by Zbigniew Zasieczny
- PJON-gRPC - gRPC server-client by Oleh Halytskyi
Researchers are active in many universities worldwide using PJON in different environments. The following list contains all the known published academic studies about PJON:
- Definition and Application of PJON-PLC for sensor networks by Jorge Gómez Segurola
Feel free to send a pull request sharing something you have made that could help, if you want to support this project you can also try to solve an issue. Thanks to the support, the expertise, the kindness and the talent of following contributors, the protocol's documentation, specification and implementation have been strongly tested, enhanced and verified:
Fred Larsen, Zbigniew Zasieczny, Matheus Garbelini, sticilface, Felix Barbalet, Oleh Halitskiy, fabpolli, Adrian Sławiński, Osman Selçuk Aktepe, Jorgen-VikingGod, drtrigon, Endre Karlson, Wilfried Klaas, budaics, ibantxo, gonnavis, maxidroms83, Evgeny Dontsov, zcattacz, Valerii Koval, Ivan Kravets, Esben Soeltoft, Alex Grishin, Andrew Grande, Michael Teeww, Paolo Paolucci, per1234, Santiago Castro, pacproduct, elusive-code, Emanuele Iannone, Christian Pointner, Fabian Gärtner, Mauro Mombelli, Remo Kallio, hyndruide, sigmaeo, filogranaf, Maximiliano Duarte, Viktor Szépe, Shachar Limor, Pantovich, Mauro Zancarlin, Franketto, jzobac, DanRoad.
The PJON project is entirely financed by contributions of people like you and its resources are solely invested to cover the development and maintenance costs; you can make a donation using Paypal, Bitcoin or Ethereum
All the software included in this project is experimental and it is distributed "AS IS" without any warranty, use it at your own risk. Licensed under the Apache License, Version 2.0. PJON® and its brand are registered trademarks, property of Giovanni Blu Mitolo email@example.com
In all cases, when installing or maintaining a PJON network, extreme care must be taken to avoid any danger. When a SoftwareBitBang bus is installed, each pin must be protected with a current-limiting resistor. When working with an AnalogSampling LED- or laser-based setup, safety glasses must be worn and transceivers must be operated cautiously to avoid potential eye injuries. Before any practical test or a hardware purchase for a wireless OverSampling, ThroughSerial or ThroughLoRa radio setup, compliance with government requirements and regulations must be ensured. When connecting a local bus to the internet using EthernetTCP or GlobalUDP, all connected devices must be considered potentially compromised, potentially manipulated or remotely actuated against your will. It should be considered good practice not to connect to the internet systems that could cause damage (fire, flood, data leak) if hacked.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526670.1/warc/CC-MAIN-20190720194009-20190720220009-00263.warc.gz
|
CC-MAIN-2019-30
| 6,851
| 34
|
https://freesound.org/people/Littleboot/sounds/147300/
|
code
|
This is a simple recording of open-air white noise in a quiet room. I've used it in quiet digital passages in some of my original music to help make the digital instruments sound a little less sterile. This is a full, unedited take; you will want to crop the start and end sections so you don't hear me switching the recorder on and off, etc. I hope you can use this if you need it!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.28/warc/CC-MAIN-20240418093630-20240418123630-00842.warc.gz
|
CC-MAIN-2024-18
| 377
| 1
|
http://cruxcollaborative.com/application/files/3515/3495/1222/cruxofit-transcript-020.html
|
code
|
Mahtab: Welcome to episode 20 of The Crux of It. I am Mahtab Rezai, I am the Principal and CEO of Crux Collaborative. We are a User Experience Consulting firm specializing in regulated industries. Today, I'm joined by my colleague Annette Gustafson. Hi Annette.
Mahtab: So we are going to go into some next-level nerd territory today and talk a little bit about keyboard-only accessibility. Exciting stuff. Recently, we redesigned our own website and in the process of doing that, and in collaborating together on balancing user experience and visual design and thinking about accessibility, we learned quite a bit. Both of us.
Annette: We did.
Mahtab: One of the things that was an area of learning that was new to both of us was around keyboard-only accessibility. How it started was our developer initially presented a design solution that we both had a really negative reaction to from how it looked visually. And we were like, why is this here? It doesn't look right. It looks like an error. He was like, it's an accessibility requirement and once we dug in, we really learned a lot about focus accessibility and what it is.
Annette: Yeah. It met the needs of accessibility, but you and I both had this reaction. Full disclosure, we've been in this industry for 20 years, so we had a flashback: you remember back in the day when you had that dotted border around things that you click? So it brought us back to that old time of, like, what's going on here?
Mahtab: It felt like a really old website and sort of like, is this an error? I kept thinking it was a coding error of some kind. But before we get to further into that, let's talk a little bit quickly about accessibility design and what that means. So how would you define accessibility?
Annette: When you design for accessibility, it really means you're designing for all people. You want to make it accessible to all people. So that's making sure that the user can navigate through the content, clearly read the content, making sure you have contrast. Being mindful of all those things. That's both the visual components as well as the techie backend stuff.
Mahtab: Absolutely. So you need to ensure that whether someone is using a standard input device like a mouse or keyboard or whether they're using something to assist them like a screen reader, magnification software, speech recognition software- that they can interact with your site.
Annette: Yes. They can use the site.
Mahtab: One of the ways that people interact with sites is by not using a mouse at all, by using keyboards only.
Annette: How does that user navigate the site? First, let's talk about who the typical keyboard-only user may be. You'd be surprised; more people than you think are controlling navigation via the keyboard.
Some users that we can think of are users that may have tremors, they don't have that fine muscle control of their hand, so it'd be very difficult to use the mouse. This is common in the condition like Parkinson's.
You can also think of the elderly population, limited dexterity. When you have arthritis in your hands, your fingers don't move, it'd be very hard to grip a mouse and have that fine control when you're navigating through a site.
So it's much easier to use a keyboard. As an alternative to using a mouse, moving around and clicking on things, how does the user navigate via the keyboard? They're using the tab key.
Mahtab: Yes. So you need to really be aware of how you're designing the site so as to not cause hardship or issues for them to make their way through the site to the information that they need.
Some of the most common issues and things that can go wrong are your navigation becomes too lengthy and just impossible to get through. The order of your navigation is poorly thought-out, doesn't make sense, makes it confusing. Or what we're going to talk about is focus indicators: you don't have them or they're poorly executed.
Annette: How is the user going to know where they are as they're tabbing through?
Mahtab: Let's talk about that. You just did a really good job of defining it, is as they're tabbing through, what's telling them about where they are? And that visual cue, that's the focus indicator.
Mahtab: As you're tabbing.
Annette: Just as a point of clarification, what they are tabbing through is the interactive elements of the site. So it helps them navigate through the navigation, the menu at the top, links, buttons, things like expandable menus, accordions and then input fields.
Mahtab: Yes. And when we say interactive elements by that also in addition to defining all the ones you did is think anything that they could then hit the enter key and it would take them somewhere.
Mahtab: So it's not just taking them from visual element to visual element, it's actually the things that help them move to the next place.
Annette: Yep. It's in lieu of that mouse click that helps them advance. It's a tab enter to navigate.
Mahtab: Yes. So sighted users who are doing this are going to be given a visual indicator and that's the focus element. So let's talk about how we create that focus element. So this was something that we learned.
We didn't realize that by default, a browser actually shows a focus state and the default focus state is an outline around an object that is keyboard focusable. So like a link or navigation element. But the reason we didn't know that is because it's apparently very popular to override it.
Annette: Yes. It's very common to override it, and you're overriding this in the CSS file. The main selector we're talking about here is the :focus pseudo-class. Often, when it is overridden, it'll be set to "outline: none", and that causes the default state to go away. Then, since you haven't specified a state, there is no visual cue to the users of what the focus state is.
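For reference, a minimal sketch of the override being described and a visible replacement; the selectors and values are illustrative:

/* The common override that removes the default focus cue: */
a:focus, button:focus { outline: none; }

/* A visible, consistent replacement: heavier outline, contrast-checked color. */
a:focus, button:focus { outline: 3px solid #1a4480; outline-offset: 2px; }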
Mahtab: So if you're looking on a website and you're tabbing and you're not seeing anything-
Annette: They overrode it!
Mahtab: If you went to view source, likelihood you'd see that, that outline is none.
Annette: So we wanted to dig a little deeper. When we had first started QAing the site before going live, we had a very chunky dark-gray outline around each of them. It met the needs of accessibility, which was fantastic, that was important. However,-
Mahtab: However, it was ugly.
Annette: It was ugly, right?
Annette: So we feel that you can find a balance in both being visually appealing and meeting the needs of accessibility. So we dug a little deeper...
Mahtab: Just really exploring our options to see what we can do that both meets the need of clearly letting the user know their focus while not looking like an icky "you are here" arrow slapped on top of the design.
Annette: Yes. Where we ultimately landed was doing something actually pretty similar to what the default browser is and choosing that one because it's familiar to users, but then we also modified it to make it work best for our site.
So some of the things that we did do is we enlarged the size of that border, we made it have more weight, we also made sure that the color that we chose passed contrast, that's another important part of accessibility. Then we also chose to remain consistent. So you can do unique focus elements per the type of...
Mahtab: You could do a different focus element for navigation versus links, that type of thing and we just decided to go with one. And we talk about this all the time. So many of the things that you do with visual design is you're speaking a language and the user is learning that language when they learn your site. We talk about link language, they can expect that links always look the same or navigable elements look the same so that they know-
Annette: Or you select the color and they're cued into that color.
Mahtab: Yes. Similarly, we didn't want to now introduce five new words into the language in the first site but just have a single consistent one that says this is your focus area.
Annette: So we tested a full page that had multiple interactive elements, and we did find that when we used a unique element for each of them, your eye couldn't track properly. Even though we had a focus state, it was shifting and it was different. So ultimately, going with a consistent one, it was very easy for the eye to track through all the different interactive elements on the page.
Mahtab: That was after, as Annette said, we've been doing this for a while. And after doing this for a while, it's both surprising and also really fun to find something new to learn.
Annette: Yeah, there's new challenges.
Mahtab: So that was a really fun one for us that we wanted to share that we really enjoyed digging into, learning about, playing around with, coming up with a solution for our site and really look forward to applying what we learned on our own site now to the experiences that we're designing.
Annette: Yes, absolutely.
Mahtab: So I think that is what we wanted to say about focus points and accessibility. So thank you for listening. You can subscribe or find us on SoundCloud, iTunes, Google Play, Stitcher as well as on our website, cruxcollaborative.com/cruxofit
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494064.21/warc/CC-MAIN-20200329074745-20200329104745-00477.warc.gz
|
CC-MAIN-2020-16
| 9,189
| 45
|
https://g4v.dev/2004/01/29/mtljpost/
|
code
|
I have now finished version 1.0 of MTLJPost.
Yeah, this was after Kristian told me how he did it; I figured I could simplify things. And I did.
Install the LJ::Simple library and put the .pl file in your Movable Type MT directory.
Edit the .pl file and set:
LJ_USER - Your username on LiveJournal
LJ_PASS - Password, obviously
SITE_NAME - Your site's name
SITE_LINK - Your site's link
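In the .pl file, that configuration block looks something like this; the exact variable syntax is assumed, and the values are examples:

# Illustrative configuration near the top of the MTLJPost .pl file:
my $LJ_USER   = 'yourname';             # your username on LiveJournal
my $LJ_PASS   = 'secret';               # password
my $SITE_NAME = 'My Weblog';            # your site's name
my $SITE_LINK = 'http://example.com/';  # your site's link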
I think that's about it...
I'm a tinker, maker, and software developer.
At home I code, game, hang out, all the cool non robot things to do.
Heavily involved with Jenkins open source, and will often submit PRs to random other projects.
I also play games, both board and video games and love to read.
You can usually find me on various services as halkeye.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358847.80/warc/CC-MAIN-20211129225145-20211130015145-00498.warc.gz
|
CC-MAIN-2021-49
| 727
| 14
|
https://pypi.org/project/pyleafarea/
|
code
|
Automated Leaf Area Calculator Using Tkinter and Deep Learning Image Classification Model.
Updated. Computer Vision and Machine Learning: a convolutional neural network to compute leaf area.
This project was a freelancing project for the School of Biological Sciences at Washington State University. The goal of the project was to detect the number of pixels that would potentially constitute a leaf. The fraction of these 'green' pixels, compared to a known quantity, yields the area (cm^2). This amounts to a classification problem over image pixels.
- Train a CNN model to learn various levels of green, red and arbitrary pixels. [Note: any color other than green and red is of no particular interest to us, hence classified as arbitrary.]
- Break the image into a list of pixels and classify them in a supervised training fashion. [Select pure green leaf images placed in the 'green' folder, pure red images in the 'red' folder, etc.]
- Compile and save the model that tests with an acceptable accuracy. [Accuracy obtained: 95%.]
- Knowing that the red reference object in every image is 4 cm^2, we computed the leaf area from the ratio of green pixels to red pixels [whose area is known].
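That final step is a simple pixel ratio; a minimal sketch of the computation, with illustrative names and the 4 cm^2 reference area from above:

def leaf_area_cm2(green_px, red_px, marker_area_cm2=4.0):
    # Leaf area = (leaf pixels / reference-marker pixels) * known marker area.
    return green_px / red_px * marker_area_cm2

print(leaf_area_cm2(green_px=120000, red_px=40000))  # -> 12.0 cm^2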
The project was implemented using the Python Keras/TensorFlow libraries and OpenCV/color packages. A substantial advantage of this program was that, thanks to the CNN machine-learning algorithm used, leaf objects with varied intensities of green were classified with high accuracy. The raw images were generated from different sources, scanner and DSLR camera, but the algorithm accommodated these differences and the computation did not result in loss of records.
This algorithm has been successfully run on at least 3,500 images and has helped the research lab save valuable time and energy otherwise invested in manual effort.
Release history Release notifications | RSS feed
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Hashes for pyleafarea-2.3.1-py3-none-any.whl
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649439.65/warc/CC-MAIN-20230604025306-20230604055306-00094.warc.gz
|
CC-MAIN-2023-23
| 2,027
| 12
|
https://www.freelists.org/post/kismac/Wanted-Digest-Vol-2-11
|
code
|
I did not receive a copy of #11 of the digest from a few days ago. Did
anyone else get one, or was this number skipped? I did receive two copies of
#10 and then one each of #12 and #13, but no #11. If there was a #11, can someone
please email it to me?
Thanks in advance.
Find high-speed 'net deals - comparison-shop your local providers here. https://broadband.msn.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424931.1/warc/CC-MAIN-20170724222306-20170725002306-00061.warc.gz
|
CC-MAIN-2017-30
| 364
| 6
|
http://freecode.com/users/rupper
|
code
|
CGI_UTILS is a set of three C++ classes: CGI, Template, and Session. CGI wraps the CGI protocol. Template provides an easy way to use templates in your CGI applications. It knows about variables and datasets (tables). Session provides the ability to pass data between your programs through shared memory.
ocicpplib is a C++ library to communicate with Oracle RDBMS through OCI. It features a JDBC-like interface to Oracle. The goal of the OCI C++ Library is to provide a simple interface to Oracle. It features support for Oracle 8 and 8i, BLOB/CLOB support, ROWID, REF cursors and nested tables.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121778.66/warc/CC-MAIN-20170423031201-00252-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 596
| 2
|
http://askubuntu.com/questions/257453/ubuntu-12-04-hangs-as-splash-screen-on-boot
|
code
|
I just dual-booted Windows 7 and Ubuntu. Everything worked, and I've been able to switch between them. When running Ubuntu, I've often had it stick at the splash screen (the Ubuntu sign with the dots) or at a purple screen, but waiting or pressing Esc has always worked, while recently it hasn't. I can't get past booting, so I can't get to the desktop.
Please note I'm new to Linux and don't know much, other than how to get to the "grub" screen - and now I can't even get past the grub screen; as soon as it boots it goes straight to that.
Can anyone help?
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163857566/warc/CC-MAIN-20131204133057-00059-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 559
| 3
|
https://www.mysciencework.com/publication/show/06fdcd6ac88481b30917645ccccaa9d6
|
code
|
We use a panel cointegration model with multiple time-varying individual effects to control for the missing factors in the credit spread puzzle. Our model specification enables us to capture the unobserved dynamics of the systematic risk premia in the bond market. In order to estimate the dimensionality of the hidden risk factors jointly with the model parameters, we rely on a modified version of the iterated least squares method proposed by Bai, Kao, and Ng (2009). Our result confirms the presence of four common risk components affecting U.S. corporate bonds during the period between September 2006 and March 2008. However, a single risk factor is sufficient to describe the data for all time periods prior to mid-July 2007, when the subprime crisis was detected in the financial market. The dimensionality of the unobserved risk components therefore seems to reflect the difficulty of diversifying the individual bond risks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687447.54/warc/CC-MAIN-20170920194628-20170920214628-00291.warc.gz
|
CC-MAIN-2017-39
| 946
| 1
|
https://gitlab.gnome.org/GNOME/mutter/-/issues/860
|
code
|
Sometimes windows are blurry after unmaximation or workspace switching
Arch Linux, gnome-shell package version 1:3.34.1-1, mutter package version 3.34.1-1. I use Xorg. I have two extensions in GNOME Shell:
- https://github.com/TomaszGasior/gnome-shell-user-stylesheet with https://github.com/TomaszGasior/my-gnome-settings/blob/1e32b3e7825f46c658fa2f77c6980656b41873b9/gnome-shell.css
It seems to me that the problem is related to decorations. After unmaximizing, the window content (by content I mean the part of the window excluding shadows) is sized/positioned as the whole window (including shadows). From the user's perspective, the left window edge (where I can move my mouse to resize the window) is moved into the content of the window - it does not match the left edge of the decoration. Unfortunately, I don't remember whether I was able to resize the window.
I can reproduce this problem with each CSD application (Firefox, Firefox Developer, Sublime Text*, gedit). It seems to me that I can also reproduce it with an SSD app (DBeaver, for example). I am not sure whether it is possible to reproduce this with bare mutter (without GNOME Shell), and since this bug is rare I don't have time to try, sorry.
* - Sublime does not use CSD itself, but I am forcing CSD with GTK_CSD=1 to get a dark window titlebar.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00398.warc.gz
|
CC-MAIN-2021-25
| 1,277
| 6
|
https://lists.pagure.io/archives/list/python-devel@lists.fedoraproject.org/2008/12/
|
code
|
On Thu, 10 Apr 2008 23:49:18 +0200, Gael Varoquaux wrote:
> On Thu, Apr 10, 2008 at 01:43:34PM -0800, Jeff Spaleta wrote:
>>> The tarballs of each released package are located on
>>> http://code.enthought.com/enstaller/eggs/source/
>>> You do need a dependency graph to be able to make some sense of this. I
>>> am making good progress on making a nice one.
>> Just for clarification... the eggs are completely source... and don't
>> contain binary blobs of any sort?
> Yes. I am not an Enthought employee. I have no financial interest or
> other interest than promoting high-quality open-source scientific
I am packaging mayavi2.
My current mayavi spec state is:
http://rakesh.fedorapeople.org/misc/python-mayavi2.spec
It does not build, but I am working on it and would love help or patches. ;)
I have already filed python-Traits:
There is an issue, probably with the Mesa libraries, which prevents importing vtk. So, if you try building mayavi2 on F10, disable SELinux until this bug is resolved.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00279.warc.gz
|
CC-MAIN-2023-50
| 993
| 18
|
http://harrisburg.psu.edu/calendar/event/psi-chi-presents-free-yoga-classes-mondays
|
code
|
Free Yoga Classes on Mondays
This event information was created Jan 27 2014 - 8:24pm, updated Apr 4 2014 - 2:30pm
Capital Union Building (CUB)
Bring your own yoga mat and release some stress in our class. Hope to see you there!
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997886087.7/warc/CC-MAIN-20140722025806-00038-ip-10-33-131-23.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 227
| 4
|
https://www.zaptor.co.za/products/a-triangular-scale-ruler
|
code
|
Triangular Scale Ruler (1:20, 1:25, 1:50, 1:75, 1:100, 1:125)
Did you know?
We offer a 1-year guarantee on all our products, guaranteeing that they are free from defects and work as promised.
Find out more here
A Triangular Scale Ruler (Red and Green)
Scales : 1:20, 1:25, 1:50, 1:75, 1:100, 1:125
Product code : RUL0040
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891624.95/warc/CC-MAIN-20201026175019-20201026205019-00661.warc.gz
|
CC-MAIN-2020-45
| 327
| 7
|
https://www.dataxstream.com/2009/07/sap-abap-performance-tuning-high-cpu-utilization-low-db-activity/
|
code
|
So I have been working on a list of blog topics. I actually have an outline that is 3 pages long, but I am not going to blog on any of the topics on the outline, because my experience this week at the client site was, I think, useful and relevant. We are at crunch time; a lot of things have stacked up, and as usual we are working to a very aggressive deadline. In recent years I have typically held leadership roles on SAP projects: usually heading up the technical team, working as the project architect, or serving as the overall project manager. In my current role I am heading up the Basis team in a very hands-on capacity. I have been doing my best not to interfere with the development team, which is headed up by a good friend of mine and a very talented senior SAP consultant. This week we have had a number of SAP performance defects logged, and upon a quick initial analysis it was clear that the SAP performance issues were not Basis related. The development team is currently buried in the perfect storm of conversions finally coming together and the test team finally kicking the tires on the rest of the delivered development interface objects. This gave me the opportunity to pitch in, get my hands dirty, and look at some code and performance issues. We knocked off a significant SAP ABAP performance issue with a 100-fold improvement when all was said and done. Looking back on the problem, it was obvious to me, but I missed a couple of clues that should have pointed me to the problem even sooner. That is what this blog post is about.
I started off staging a test to look at a performance issue related to an SAP customer load and ended up diagnosing a custom SAP invoice interface problem with Vistex. We had staged a test to look at why SAP V2 update processes were significantly bottlenecking when the customer load program ran. Just before I pulled the trigger on the test, I decided to fire up topas on the AIX box to look at how the SAP server was running. That is when I saw something very interesting: a custom-developed SAP invoice interface program had been running for nearly 16 hours and was consuming nearly 1/3 of the CPU capacity of the instance (3 cores, 4-and-change GHz processors). I did a little checking, and we were processing a file with approximately 47,000 invoices. Now that just ain't right! You should be able to process hundreds of thousands of invoices an hour on an SAP instance configured as ours is.
It was a programming problem, and just using topas I should have been able to predict the problem and know where to look. I will explain why I should have known where the problem was, and then go into how I actually figured out the issue. The smoking gun should have been the following clues. The SAP server we are testing on is a two-box instance: separate SAP CI and DB. Topas was reporting that one SAP batch process on the CI was consuming 1/3 of the CPU capacity on the box; that process was consuming a pretty normal amount of memory and was stable. The SAP DB server was under no visible load throughout the process, and there was no real network activity to be measured. The interface program reads a file, formats the data, and passes the data to a standard SAP posting function. We were posting an invoice approximately once every 2 seconds.
Given just the above information, what do we know? Experience tells me that my server should be sized to handle hundreds of thousands of invoices an hour; we are processing one every 2 seconds. My process is consuming significant CPU. My process is stable from a memory perspective. My process is not hitting the DB. So what is my process doing? It is crunching through ABAP. More specifically, it is crunching through ABAP and it is not doing DB operations.
So what was the problem? We had 3 nested loops. The outer loop had 50k rows, the next loop had 150k rows, and the final loop had 800k rows. That is a lot of looping. In testing, the developer used small files, maybe 20 invoices max. While the code was already broken during unit testing, the problem would not be visible with small test files. So how did I go about finding the problem? I kicked off an SQL trace using ST01 and could see that for each invoice we had about 2000 microseconds of DB operations. I then reviewed the code: selection screen, file import, and then processing. It took less than a minute to import the file and start the processing routine. I commented out the call to the posting function module (a.k.a. the heavy lifting) and replaced it with a write statement, and it was still taking approximately 2 seconds per invoice. The problem therefore lay in the routine that formatted the data from the file to present it to the posting function. As soon as I opened that part of the code, it was obvious we had nested loops, and looking at the counts on the internal tables it was pretty clear why it was running so long.
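The classic fix for this shape of problem is to stop nesting full-table loops and use keyed reads on sorted (or hashed) internal tables instead; a sketch with made-up table and field names:

* Illustrative only - sort once, then replace the inner LOOP with a keyed read.
SORT lt_items BY invoice_id.

LOOP AT lt_invoices INTO ls_invoice.        "outer table, ~50k rows
  READ TABLE lt_items INTO ls_item
       WITH KEY invoice_id = ls_invoice-id
       BINARY SEARCH.                        "O(log n) lookup instead of O(n) scan
  IF sy-subrc = 0.
    " process the matching item...
  ENDIF.
ENDLOOP.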
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00183.warc.gz
|
CC-MAIN-2020-34
| 4,902
| 5
|
https://github.com/mooped/MoopEngine-sprLite/tree/d8919f18432d4dd29718077656c5dbbb0a81b146
|
code
|
Lite sprite only game engine.
MoopEngine sprLite
------------------
A lightweight sprite engine created for Ludum Dare #18 and hopefully future projects. Currently supporting Windows and Mac OS X, 32 or 64 bit. Linux support could be added with a little effort. Please let me know if you decide to use it for anything. email@example.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450659.10/warc/CC-MAIN-20151124205410-00085-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 506
| 8
|
https://www.generatormix.com/random-consonant-generator
|
code
|
Random Consonant Generator
To generate a random consonant select the options below and click generate.
What is this tool?
A consonant is a letter which requires the speaker to at least partially close their vocal tract. In the English alphabet, there are 21 consonants, which you can pick out at random using this tool.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144111.17/warc/CC-MAIN-20200219092153-20200219122153-00287.warc.gz
|
CC-MAIN-2020-10
| 319
| 4
|
https://www.darrinbishop.com/blog/2013/01/whats-new-in-mobile-for-sharepoint-2013-part-1/
|
code
|
SharePoint 2010 support for mobile devices has been limited: simple mobile views of lists and libraries were supported in mobile browsers using a mobile redirection feature. SharePoint 2013 steps up the support for mobile devices with features to support page design, content creation and remote API improvements.
This post is the first of two posts and provides an overview of the SharePoint 2013 features that support page viewing, page design and content creation for mobile devices. The second post will provide an overview of SharePoint 2013 mobile programming. Later posts will provide an in-depth look at each mobile feature.
Mobile views existed in SharePoint 2010 and have been enhanced in SharePoint 2013. Mobile views provide a default look and feel to support basic SharePoint team site templates in a mobile browser, giving mobile access to SharePoint lists and libraries. Mobile views are created using a page redirect to a mobile-enabled page; SharePoint relies on the user-agent string to decide when to redirect. Site administrators can enable or disable mobile views with the Mobile Browser View feature, which is enabled by default for the team site, blank site, document workspace, document center and project site templates. Mobile views support basic browsing of the common list and library pages.
There are two mobile views and an optional PC View, or full-screen view. Older, non-HTML 5 mobile browsers display in classic mode; HTML 5 browsers display in contemporary mode. Views provide a link to display the PC View or full-screen view.
Mobile Views (Classic, Contemporary, Full)
In general, when Mobile Browser View is not activated, all devices will attempt to display in full-screen mode, which may or may not be pretty. When the Mobile Browser View feature is activated, mobile devices are redirected, based on the User-Agent string, to a mobile page and display either the classic or contemporary view. A link in the contemporary view allows the viewer to switch to the full-screen view.
Device Channels are a new publishing feature in SharePoint 2013. Device channels allow the site administrator to define a group of devices based on the User-Agent header value. Device Channels alone do not change the page rendering; device channels are the means to "tag" a request as belonging to specific criteria. SharePoint defines a Default device channel, and site administrators can create more device channels based on their requirements. Device channels group devices by User-Agent string, allowing the site administrator to create channels that tag multiple device types in the same channel. For example, a site administrator could define a Device Channel for HTML 5 phones and include Windows Phone, iOS and Android-specific agent strings for HTML 5-compatible devices. Below is an example of three custom-defined channels.
Device Channels really do not do much beyond defining categories of devices; other SharePoint features work with Device Channels to change the look and feel or content of a page. Master pages can be changed based on the device channel, and this is supported out of the box: each channel can be associated with a specific master page. During page assembly the device channel is determined and the appropriate master page is used, allowing each device channel to have a unique look and feel. The image below on the left shows a page viewed from Windows Phone 7; the image on the right is the same page viewed in Internet Explorer. Only a device in the WinPhone Device Channel will display the Windows Phone logo, because the logo is defined in the WindowsPhone7 custom master page and associated to the WP7 device with a Device Channel.
Mobile Panel and Device Channel Controls
Device channels and master pages can change the look and feel of a Publishing site, but they do not do much for content. SharePoint has two new controls to manage content based on a channel: the Mobile Panel control and the Device Channel Panel control. Each control is added to a page at design time and includes an attribute to associate the control with a channel. If the active channel matches the set value of the control, then the contained content is rendered. So far there seems to be little difference between choosing one control over the other; future blog posts will dig deeper into the two controls to determine the value of one over the other.
Below is example page markup that displays text for the Windows Phone channel using the Mobile Panel control. The contained text will display when the device is in the WinPhone channel.
<PublishingWebControls:MobilePanel runat="server" IncludedChannels="WinPhone">
  <p>This is viewed using a Windows Phone Device</p>
</PublishingWebControls:MobilePanel>
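For comparison, here is a minimal sketch of the same content wrapped in the Device Channel Panel control. The "Publishing" tag prefix and the channel alias are assumptions; the actual prefix depends on the Register directives on your page layout:
<Publishing:DeviceChannelPanel runat="server" IncludedChannels="WinPhone">
  <!-- Rendered only when the request has been mapped to the WinPhone channel -->
  <p>This is viewed using a Windows Phone Device</p>
</Publishing:DeviceChannelPanel>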
The last feature covered in this first post is Image Renditions. Image Renditions are not necessarily a "mobile" feature, but mobile page design can certainly make use of renditions. Image Renditions are generic standards for rendition sizes of an image; they are not specific to any one image. Using image renditions, a site administrator can define a collection of standard heights and widths which can be used by content creators when defining or selecting an image. For example, a "Large Banner" can be defined as 400 pixels wide and 200 pixels high, while a "Small Banner" can be defined as 100 pixels wide and 25 pixels high. Image Renditions are accessed via Site Settings -> Look and Feel. The Blob Cache must be configured before Image Renditions can be used.
During the design or content creation process, an image can be associated with a rendition. The benefit of associating an image with a specific rendition is that the image is rendered to the appropriate dimensions and placed into the Blob cache. Instead of retrieving a large image and scaling it in the browser, the image is scaled on the server and served from the cache. In the mobile world, where most data is transferred over a cell connection, renditions can save time and money.
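Renditions can also be requested directly through the image URL, which is a handy way to check what a mobile page will actually download. The rendition ID and image path below are made up for the example; the real IDs are listed under Site Settings -> Image Renditions:
<!-- Request rendition 2 (e.g. the "Small Banner") of this image -->
<img src="/PublishingImages/banner.jpg?RenditionID=2" alt="Banner" />
A Width/Height pair (for example ?Width=100&Height=25) can be used instead, in which case SharePoint serves the closest matching rendition.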
Associating an image in SharePoint 2013 with a specific rendition is supported by the image controls. Below is the dialog to edit an image on a publishing page; content creators can select the appropriate image rendition.
The selected rendition can also be managed using the Ribbon.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100264.9/warc/CC-MAIN-20231201021234-20231201051234-00651.warc.gz
|
CC-MAIN-2023-50
| 6,326
| 17
|
https://daliynews45.com/how-can-game-design-courses-help-you-improve-your-skills/
|
code
|
If you’re looking to improve your game development skills, whether you want to become a professional developer or simply hone your hobby into something more serious, then game design courses in Ahmedabad are the best way to do it.
There’s no substitute for hard work and practice when it comes to becoming an expert in any field. However, there are certain traits and skills. That can be learned through formal education that will improve your chances of success. Once you enter real-world employment.
Understand game design theory
In order to truly understand game design, you need to have a working understanding of how games are made. You should know the different stages of game development and what people do in each stage. You should be able to identify the roles involved in designing a game and their responsibilities during its development.
In addition, you need to learn about various theories and principles that help guide your decision-making process when making games. These theories can either be practical or theoretical (or both).
- Game Design Courses to develop a strong foundation
Game design courses in Ahmedabad can help you to develop a strong foundation in game development. You will learn how to make games and how to become the best game designer. The skills and knowledge that you learn through these courses will help you create your own games right away.
You can enroll in these courses at any level; there are classes for beginners as well as advanced students. Some of them have even been designed specifically for professional developers who want to improve their skills and make better games by learning from others’ experiences.
Learn the tools of the trade
The tools of the trade are a vital part of game development. You can’t create anything without knowing how to use the right tools, and that’s why it’s important to learn them as early as possible.
You can start by learning the basics of each tool, such as how to use them and how they work. You should also learn how to create a game prototype using these tools, because it will help you get a feel for what it takes to create an entire game.
Get to grips with the basics of Unity, Unreal Engine and more.
Game engines are software tools used by professional game developers to create games. Unity and Unreal Engine are the most popular game engines, but there are many more out there.
Typically, a game engine will include an editor that allows you to build your game using pre-defined components and resources. You can also use it to add custom code or scripts if needed. Some of these programs even ship with source code so that you can modify them as needed.
Understand the theory behind programming logic
Have you ever wondered why programming languages work the way they do? If so, you’re not alone. Many developers have tried to understand how logic is applied in programming and have often wondered how it can be used to design games.
In this course, you will learn about the theory behind programming logic and how it can be applied to solve problems with code. You will also learn about the theory behind game design and how it can be used by developers when designing new games.
Finally, engaging with the Arena Animation Ahmedabad course teaches students how to apply both theories together in real-world situations, with real-life examples of what has worked for other game developers before them.
Game design is an art form that has been around for decades and will continue to grow from strength to strength. This article has given a brief overview of some of the most important things you should know if you want to make games in this exciting industry.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817033.56/warc/CC-MAIN-20240415205332-20240415235332-00098.warc.gz
|
CC-MAIN-2024-18
| 3,674
| 19
|
http://enrevanche.blogspot.com/2005/04/donate-your-old-cellphones.html
|
code
|
Around here you can give those excess cell phones to the YW (not YM) CA and they will re-program them to dial 911 only. They distribute them to women in precarious situations (e.g. domestic violence). Perhaps your local YW does something similar.
New York City has such a program, and it's coordinated through Verizon Wireless. This weekend, I'm going to go down to the local Verizon Store with a box of old phones collected from our apartment building (I am sure that there are other folks in my building with old cells lying around.)
Your community probably has such a program.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541157498.50/warc/CC-MAIN-20191214122253-20191214150253-00125.warc.gz
|
CC-MAIN-2019-51
| 578
| 2
|
https://laforge.gnumonks.org/blog/index-36.html
|
code
|
During the last week or so, I've been spending way too much time implementing
the network-side GPRS protocol stack, as part of an effort to provide not only
GSM voice + SMS but also GPRS+EDGE data services with OpenBSC.
GPRS is fundamentally very different from the classic circuit-switched domain
of voice calls and CSD (circuit switched data): not only conceptually and on
the protocol level, but also in the actual system architecture. The way it
was added on top of the existing GSM spec is by making no modification to the
BSC and MSC, and only the minimal necessary modifications to the BTS. They
then added a new Gb interface to the BTS, and the SGSN and GGSN core network
components, which in turn talk to the HLR/VLR/AUC.
So in the most primitive GPRS network, you can have the GSM and GPRS domains
completely independent, only using the same databases for subscriber records
and authentication keys. This goes to the extreme end that your phone will
actually independently register with the GSM network (IMSI ATTACH / LOCATION
UPDATING) and with the GPRS network (GPRS ATTACH / ROUTING AREA UPDATE). While
both of the requests get sent to the same BTS, the BTS will send the GSM part
to the BSC (and successively the MSC), and the GPRS part to the SGSN.
Also, the actual software architecture looks completely different. In the GSM
circuit-switched domain you always have a dedicated channel when you talk to a
phone. The number of dedicated channels is limited by the transceiver capacity
and the channel configuration. In OpenBSC I chose to simply attach a lot of state
to the data structure representing such a dedicated channel. In the
packet-switched domain this obviously no longer works. Many phones can and
will use the same on-air timeslot and there is no fixed limit on how many
phones can share a radio resource.
What's further important to note: the protocol stack is very deep. If you look
at the GPRS-related output on an ip.access nanoBTS while your mobile phone makes
an HTTP request, the stack is something like
HTTP-TCP-IP-PPP-SNDCP-LLC-BSSGP-NS-UDP-IP-Ethernet. While the first
HTTP-TCP-IP-PPP is obvious, I would not have expected that many layers on the
underlying network, especially if you look at the almost zero functionality
that NS (GSM TS 08.16) seems to add to this stack. Also, the headers within
the protocol can actually be quite big. If we only count the number of bytes
between the two IP layers in this stack: 8 bytes UDP, 4 bytes NS, 20 bytes
BSSGP, 6 bytes LLC and 4 bytes SNDCP. That's a total of 42 extra bytes. And
that for every small packet like TCP SYN, SYN/ACK or the like! No wonder
that mobile data plans have been prohibitively expensive all those years ;)
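As a trivial sanity check of that arithmetic, a minimal C snippet (the numbers are simply the header sizes quoted above):
/* overhead.c - sum of the Gb-side header bytes between the two IP layers */
#include <stdio.h>
int main(void)
{
	int udp = 8, ns = 4, bssgp = 20, llc = 6, sndcp = 4;
	printf("extra bytes per small packet: %d\n", udp + ns + bssgp + llc + sndcp);
	return 0;
}
Compiled and run, it prints 42.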
So with regard to the actual GPRS implementation in OpenBSC, the following
things had (or still have) to be done:
- Add support for generating System Information 3 + 4 rest octets and System Information 13
This is a very time-consuming bit-fucking experience, encoded relative to the padding pattern of 0x2b. Without this, the phones would not realize that the cell actually supports GPRS. DONE.
- Add support for the ip.access extensions to the A-bis OML (TS 12.21) layer
This is needed to configure the GPRS parameters such as channel configuration, coding schemes or the IP and NS/BSSGP parameters for the link to the SGSN (OpenBSC). Without it, the BTS would not even start to speak NS/BSSGP, i.e. not connect to OpenBSC for GPRS services. DONE.
- Implement the NS protocol (GSM TS 08.16)
Turns out this was really simple, as NS doesn't really do much anyway. DONE. (A minimal header sketch follows this task list.)
- Implement the BSSGP protocol (GSM TS 08.18)
This protocol is - among other things - responsible for flow control, both globally for the
BTS and individually for each MS. I've implemented the basic functionality to be able to
send/receive signalling and user data, but no flow control yet.
- Implement the LLC protocol (GSM TS 04.64)
This is actually the protocol that is terminated between the MS and the SGSN, so we have moved
beyond the BTS level here: actual data from/to the mobile phone. I've implemented a minimal subset
of it, including the CRC24 checksumming. I'm not taking care of packet loss,
retransmissions or fragmentation yet; just simple S, UI or U frames.
- Implement the GPRS mobility management (GSM TS 04.08)
This is pretty much work in progress, but GPRS ATTACH and ROUTING AREA
UPDATE are already handled. More work is needed here, especially with regard to
persistent storage of P-TMSI allocations as well as the last-seen position of every MS
in a database.
- Implement the GPRS session management (GSM TS 04.08)
These are the messages for activating and de-activating PDP contexts. Work has not started yet.
- Implement GGSN functionality (PPP tunnel endpoints)
After all, we need to terminate the PPP sessions that the phones establish somewhere. Work has not started yet.
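As promised above, here is a rough sketch of what those 4 bytes of NS overhead look like for user data. The field layout is my reading of GSM TS 08.16, not code taken from OpenBSC:
/* NS-UNITDATA header sketch (GSM TS 08.16); these 4 octets are the
 * "4 bytes NS" counted in the per-packet overhead earlier. */
#include <stdint.h>
struct ns_unitdata_hdr {
	uint8_t  pdu_type;  /* NS PDU type, here NS-UNITDATA */
	uint8_t  sdu_ctrl;  /* spare octet / NS SDU control bits */
	uint16_t bvci;      /* BSSGP Virtual Connection Identifier, big endian */
} __attribute__((packed));
/* The NS SDU, i.e. the BSSGP PDU, follows immediately after this header. */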
Once all that full stack has reached a level where it works to a minimal
extent, issues like BSSGP flow-control as well as LLC re-transmission,
fragmentation and [selective] acknowledgement have to be dealt with.
Finally, if somebody is bored enough, he could also work on things like combined
GSM/GPRS attach, or SMS over GPRS.
As you can see, it's quite a large task. But we need to start somewhere, and a
lot of this will still be needed when moving into the 3G and 3.5G domain, if
not at the lower protocol levels, then at least from the software architecture point of view.
If you're into communications protocol development and don't mind our ascetic
'plain old C language' approach and are interested to contribute, feel free to
introduce yourself on the OpenBSC mailing list.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00757.warc.gz
|
CC-MAIN-2023-23
| 5,643
| 74
|
https://ask.metafilter.com/9039/Is-there-a-way-in-Movable-Type-to-view-and-delete-comment-by-comment
|
code
|
Is there a way in Movable Type to view and delete comment by comment?
July 29, 2004 3:18 PM Subscribe
Could be a stupid question here- thought I had good comment spam protection, but alas, overnight I got hit with 3,300 comments. Is there a way in MT to view and delete comment by comment, is there a plug-in for such a thing, is there an end to the madness?! Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988828.76/warc/CC-MAIN-20210507211141-20210508001141-00515.warc.gz
|
CC-MAIN-2021-21
| 367
| 3
|
https://cashlr.co.uk/pure-language-processing-in-finance-acing-digitization-recreation.html
|
code
|
Natural language processing in finance can extract and analyze unstructured data by using OCR, sentiment analysis, named entity recognition, and topic modeling applications
With digital payments rising across the globe, how can financial organizations ensure maximum sales conversion and payment acceptance, as well as reduce risk exposure? Sounds alarming?
In a finance industry that is highly reliant on data processing and information, maintaining a marginal edge and understanding the natural nuance of customers to provide on-time resolution requires AI-related technology.
As per Gartner, AI technologies like Natural Language Processing (NLP) are gaining traction with businesses to create new products, improve existing products and grow the customer base.
This rapid evolution is driven by two factors. First, having grown accustomed to digital assistants like Siri in their daily lives, customers expect the same in the workplace. Second, today NLP is no longer reliant on rules-based processes; with machine learning, NLP allows greater scalability and accuracy. Let's take an in-depth look at natural language processing and how NLP is a sure-shot solution for making data-driven decisions in real time.
Natural Language Processing is a subset of artificial intelligence technologies that use machine learning algorithms to enable computers to understand, interpret, and comprehend the natural nuance of human context.
Organizations using chatbots and digital assistants can leverage NLP to mine insights from vast amounts of data and understand users' natural language queries. NLP for financial documents can be a game-changer.
For example, financial professionals spend a lot of time reading the financial press, analyst reports, and other sources of information. NLP can help design a system that makes informed decisions in real time by converting unstructured data in textual format with minimal human intervention.
Top Applications of Natural Language Processing in Finance
There is a wide variety of NLP applications, but some of them stand to benefit the finance sector the most. Let's dig in.
Optical Character Recognition (OCR)
In financial organizations, dealing with piles of data is a common occurrence. Corporate filings, research and analytics reports, and quarterly earnings transcripts are some of the financial documents that financial analysts have to paddle through.
The piling up of unstructured data (e.g. PDFs, email, images, text) makes analysis more time-consuming and tedious. At this juncture, optical character recognition lets you convert unstructured financial datasets into a digestible format to be fed into the NLP pipeline for further analysis.
Sentiment Analysis
A positive customer experience is paramount to any financial organization. By using conversational AI chatbots, financial institutions can keep an ear on the voice of the customer.
However, the underlying sentiment behind a customer's voice can only be determined by sentiment analysis.
Sentiment analysis algorithms detect customer pain points and their emotional quotient, allowing the financial institution to design policies and services around customer interests.
Over time, this information can be consolidated to offer personalized financial products and services to customers.
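A hedged illustration of the idea using NLTK's off-the-shelf VADER analyzer (the review text is invented):
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the lexicon
sia = SentimentIntensityAnalyzer()

review = "The loan approval took forever and nobody returned my calls."
print(sia.polarity_scores(review))
A strongly negative compound score flags this customer as a potential pain point worth escalating.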
Named Entity Recognition
No matter which organization a customer interacts with, data privacy and security are top concerns. And the financial industry is packed with processes like credit risk management, underwriting, and loan disbursal that require huge human effort and fraud prevention.
Using named entity recognition enables the finance sector to go beyond sentiment analysis and detect real-life concepts such as a specific person, a company name, a location, an organization, and others.
By collecting the extracted information, NLP systems can easily compare it against the customer information in their database and create an alert if fraud or money laundering is detected.
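A minimal sketch with spaCy's small English model, assuming it has been installed with python -m spacy download en_core_web_sm (the sentence is invented):
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("John Smith wired $250,000 from Acme Holdings to an account in Zurich.")

for ent in doc.ents:
    # PERSON, ORG, GPE, MONEY, ... can then be matched against KYC records
    print(ent.text, ent.label_)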
Topic Modeling
Due to irregular, unclassified data and seasonal variations, predicting time series for financial analysis is a complicated job.
However, machine learning-enabled topic modeling techniques can provide semantically structured data by classifying frequent words and phrases and grouping them for easy financial analysis and marketing decisions.
That's how NLP provides precise workflow automation for the financial manager with a lower turnaround time.
The Future of NLP in Finance
With NLP in finance, the future is bright, and your financial institution's future should be too. From taking over mundane and repetitive tasks to providing robust financial analysis support, NLP allows finance organizations to effectively ensure regulatory compliance and gain increased market insight. It's high time for financial organizations to make the transition from smart to smarter, because the longer you procrastinate, the faster you lose the game.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00347.warc.gz
|
CC-MAIN-2022-27
| 5,103
| 26
|
https://www.stackforce.de/en/products/wmbus-to-lpwan-bridge
|
code
|
Use our WMBus-to-LPWAN Bridge to bring your existing networks up to date! Using the innovative Bridge will save you time and money!
Operation Modes and Features
Operation mode: eXtreme Low Power (XLP)
- Optimized battery lifetime management for up to 12 years
- Training mode for automatic data transmission interval synchronisation
Operation mode: General Purpose (GP)
- Forwarding of all meters in range, with optional filtering by e.g. manufacturer ID, version number, ...
- Alternatively, forwarding of white-listed devices
- USB power supply for continuous operation
Choose operation mode during configuration, reconfigure at any time.
- Configuration via USB or remote via LoRaWAN
- WMBus & LoRaWAN compliant encryption
- Supports OMS security profiles A and B
- Fragmentation of WMBus data for transport via LoRaWAN/Sigfox
- IP-rating IP67
- White label order possible
- Supports classes A and C
Optional: Class B
- Supports specification v1.0.2b
Optional: v1.0.3 and v1.1
- Designed for region EU868
Optional: IN865, AS923, US915, AU915, KR920
- Activation by Personalization (ABP) or Over-The-Air-Activation (OTAA)
Wireless M-Bus Features:
- Supported specifications: EN 13757, OMS v3, OMS v4, ... and many more
- Supports the following Wireless M-Bus modes: S1, S1-m, S2, T1, T2, R2, C1 T-A, C2 T-A, C1 T-B, C2 T-B
- Optional: Bidirectional communication with Wireless M-Bus devices
STACKFORCE owns the famous, well-known and well-proven hardware-independent Wireless M-Bus stack.
STACKFORCE maintains the best-known open-source implementation of the LoRaWAN end-node stack.
STACKFORCE is an experienced integrator of Sigfox.
What would be more obvious than to have STACKFORCE combine Wireless M-Bus with LoRaWAN or Sigfox to create the WMBus-to-LPWAN Bridge?
Manage the WMBus-to-LPWAN Bridge remotely from within your IoT Cloud Backend:
- Add or remove meters to be considered for bridging.
- Set the encryption key to enable data filtering.
- Change configuration like heartbeat interval.
- Add, modify or remove data filters to optimize the amount of data to be transmitted via LPWAN.
- Temporarily switch to class C to enable quick configuration.
All done remotely, from within your IoT Cloud Backend.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144637.88/warc/CC-MAIN-20200220035657-20200220065657-00230.warc.gz
|
CC-MAIN-2020-10
| 2,229
| 38
|
https://communities.bentley.com/products/projectwise/f/project-review-forum/182008/navigator-connect---need-a-viewer-for-dngs-and-look-like-v8i/662588
|
code
|
With V8i not being supported, is there a Bentley product that will help me with the following:
A small contradiction exists in your question: you are asking about a DGN viewer, but in the text you mention redlining, which is not viewing but extra functionality.
As you realized, Bentley Navigator V8i, which was always targeted primarily at 3D models and review functionality (markups, issues...) and was based on the MicroStation engine, has been discontinued. But two options still exist: Bentley View CONNECT Edition and Bentley Navigator CONNECT Edition.
For viewing (primarily 2D files), Bentley offers Bentley View, both in V8i and (after many complaints from users) also in a CE version. In relation to your post, the features are:
Bentley Navigator CONNECT Edition is based on completely different technology, which is not necessarily bad, but it offers different set of pros and cons:
Bentley Accredited Developer: iTwin Platform - AssociateLabyrinth Technology | dev.notes() | cad.point
Where can I find a link to download Bentley Navigator CONNECT Edition?
please respect best practices and do not hijack an existing discussion with another question, especially when it's 2 years old. When you have a question, it's always better to ask in a new post and, when important, to share a link to the former discussion.
bigroo said:Where can I find a link to download Bentley Navigator CONNECT Edition?
I think, because Navigator is a discontinued product (replaced by the cloud-based Design Review), it cannot be downloaded now.
The only Navigator I see on the Software Downloads web is OpenRoads Navigator.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215790.65/warc/CC-MAIN-20220703043548-20220703073548-00558.warc.gz
|
CC-MAIN-2022-27
| 1,590
| 11
|
https://discuss.flyte.org/t/9029622/hi-slightly-smiling-face-we-are-trying-to-build-a-spark-task
|
code
|
Théo LACOUR02/09/2023, 2:26 PM
We run the example with
pyflyte run --remote --image ghcr.io/flyteorg/flytecookbook:k8s_spark-43585feeccabc8a48452dc6838426f3acf4c6a9d pyspark_pi.py my_spark --triggered_date now
and then use
pyflyte --pkgs path.to.code package --image my_image --force --fast
to package the code and then upload and run the new version of the workflow; we notice that the behavior is the same as before. As I understand it now, this could be because the Spark executors still have the previous code. Does this work as intended? TL;DR: can we use --fast packaging to avoid building / pushing Docker images for Spark tasks?
Théo LACOUR02/10/2023, 9:56 AM
I register the archive with
flytectl register files --archive flyte-package.tgz etc.
with a correct project, domain and service account. Then I use the console to run the workflow. The behavior I am expecting: the new code packaged in my .tgz file should be used by the Spark executors, since I used the fast-registration tag. What I observed: the Spark driver code is updated, but the Spark executors' code is the one from the image, not the one from the .tgz file. I wonder if this is expected behavior, and if it is, how can I register a workflow with new code without having to re-build a Docker image?
Evan Sadler02/10/2023, 3:09 PM
Evan Sadler02/10/2023, 3:17 PM
pyflyte register --destination-dir .
Théo LACOUR02/10/2023, 3:51 PM
Evan Sadler02/10/2023, 3:52 PM
or whatever I had set it to. Good luck!
Théo LACOUR02/13/2023, 1:17 PM
I added it in my command `flytectl register etc.`, which has the same effect as `pyflyte register --destination-dir etc.`. It did not solve my problem (as my code did actually run), so I might write a GitHub issue later, unless this works 'as intended' or as a limitation of Spark (i.e. Spark executors should be expected to pull the image and use it 'as is' instead of using the updated code).
Tyler Su02/15/2023, 11:11 PM
Théo LACOUR02/16/2023, 9:23 AM
Yini Gao04/12/2023, 10:54 AM
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00442.warc.gz
|
CC-MAIN-2023-50
| 1,997
| 25
|
http://www.draconinteractive.com.au/
|
code
|
Who am I?
Dracon Interactive was formed in early 2016 as a solo indie game development business focused on creating unique and enjoyable games.
My name is Peter Carey. I am the mind behind Dracon, and you should feel free to learn more about me at the "About" page!
Projects in construction, for your viewing pleasure
The Eternal Series
A series of games including;
A 3rd Person Action RPG set in a dystopian fantasy world besieged by a draconic migration
A village-management RPG with ties into other Dracon games.
Veiled Project. Keep an eye out for announcements!
- Draconic Login Status: In Further Production
- I have secured a full-time Game Developer contract at St John Western Australia. This is an amazing experience that is teaching me many facets of game design that I had never even dreamed of!
- Check out my work there --> here <--
- Shadergraph is love. Shadergraph is life. Check below for my latest work!
- Eternal Conflict is getting some new arms and animations, and I have just finished upgrading it to the High-Definition Render Pipeline, which has significantly improved the graphics.
- HDRP is in! And it looks awesome (will post pictures)
- The arms are set to take more than a little bit of time
Stay in touch
Subscribe to our email list for regular updates and awesomeness!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668529.43/warc/CC-MAIN-20191114154802-20191114182802-00235.warc.gz
|
CC-MAIN-2019-47
| 1,306
| 18
|
https://devpost.com/software/nft-factory-bonus-space-game-demo
|
code
|
This bonus project uses the NFT Generator Framework to build an NFT space shooting game.
The game allows you to mint a basic ship, fight, fetch new parts, then upgrade your spaceship on-chain.
Check out the Github repository for more info and a demonstration.
To start the game, just do
yarn install
yarn start
Then open http://localhost:8080
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00292.warc.gz
|
CC-MAIN-2023-14
| 397
| 7
|
https://twitch.uservoice.com/forums/929953-igdb?category_id=389371&status_id=5348695
|
code
|
http://retroachievements.org/ is a platform which provides achievements for retro games running in emulators. Currently there are only achievement categories for Steam, PSN and Xbox.
If we had a RetroAchievements category we could add achievements to thousands of games which may never have achievements otherwise. It would be very cool for things like GOG Galaxy 2.0 integration plugins. (6 votes)
This is interesting for sure, we would have to look into it. One thing that comes to mind is whether or not those achievement lists are official, or approved by the original developer/publisher, as this could prove tricky from an IP perspective. Since we would redistribute the data, we’re very careful about the source and provenance.
We’ll be looking into this internally first. Thanks!
IGDB has the ability to export your data, which is great, and it would be nice to have a way to import this exported data back to your account. (2 votes)
Could you clarify what you mean by data in this case; is it to do with the lists and ratings in particular?
Curious to learn more about such a use case, is there a particular need to re-upload the exported CSV or Excel document rather than use the website?
Could you give us more information what you mean in this case? By status do you mean adding a new one along side Alpha, Beta, Early Access, Cancelled etc..?
If so, how would you define Dropped? It sounds like it would be similar to Cancelled, as in the development has been indefinitely put on hold.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016373.86/warc/CC-MAIN-20220528093113-20220528123113-00345.warc.gz
|
CC-MAIN-2022-21
| 1,520
| 10
|
https://blogs.msdn.microsoft.com/windows-embedded/2013/02/14/intelligent-systems-on-parade-at-techdays/
|
code
|
Posted By Myriam Semery
Windows Embedded BG Lead
This was my fourth year participating in the TechDays France event with the Windows Embedded team. It’s been great to witness the event evolve and grow each year. In fact, Microsoft estimates that over 17,000 visitors attended this year.
The theme for our booth this year was around retail and hospitality. Windows Embedded extends Windows technologies to intelligent systems. Windows Embedded is already well established and known in the retail industry and works with key technology partners, such as IER, Acrelec, Improveeze, IPM and Itelios, that work with most of the key retailers in France to deliver intelligent systems and seamless, personal retail experiences to customers. We invited our partners to demonstrate the exciting and innovative work they have created using Windows Embedded. These solutions offer a new perspective to retailers and new experiences for customers. Take a look.
The first Bluecar electric cars were delivered for the Autolib' car-sharing program. IER created an intelligent system to make it quick and easy for consumers to sign up and access the vehicles. The service began on December 5, 2011 with 250 Bluecars available to the public, and rising to around 1,750 cars today. The intelligent system enables immediate identification of the customer, which enables retrieval of real-time user preferences and settings to personalize the driver’s environment.
ITelios’ Windows Embedded 8-based solution ITSHOES provides a real-time cross-channel customer experience. It demonstrates how a consumer can identify, read articles, place orders or manage their loyalty account on a website e-commerce, mobile and in-store device such as a Microsoft PixelSense table. Once the customer is identified, it is possible to load their preferences and purchasing history from Microsoft Dynamics CRM to offer a personalized experience.
With Terminal IER 960, IER offers shoppers a fast and simple way to check out. Without having to unload each item from the basket, the solution identifies all items in a batch process by RFID tags, disables any security labels and provides the shopper with their total. IER960 runs on Windows Embedded Standard 7. Additional data, such as calories or potential allergy information for each item, can be stored in the RFID tag, or alternatively in the cloud for bigger volumes of data to enable real-time informed decision-making on this innovative POS system.
Improveeze’s multi-touch, Kinect-enabled kiosk, C4Shop Focus, is designed for retailers who want to expand their product offering to allow customers to virtually shop with screens or touch pads. Improveeze chose to deveelop its solution on Windows Embedded Standard 8.
Working with Improveeze, IPM also demonstrated the Windows Embedded Standard 7-based Borne EK3000-STD EasyShopper, which enables retailers to provide consumers access to electronic catalogs, alerting them in a few clicks whether the item is available in the store. Customers can choose the method of delivery and check out right at the kiosk, enabling the retailer to secure immediate sales for items that have run out of in-store inventory.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583804001.73/warc/CC-MAIN-20190121172846-20190121194846-00055.warc.gz
|
CC-MAIN-2019-04
| 3,186
| 9
|
https://www.pensionplanpuppets.com/2012/2/29/2834849/leafs-suck
|
code
|
Credit to Down Goes Brown for the title, he read it in a newspaper like a hundred years ago or something (I can't remember ever reading a newspaper except for the USA Today they give you at crummy hotels).
Once again the Leafs were outgoaled. So the team sucks, even though they almost beat another sucky team. That doesn't mean we suck. We're pretty cool I think. We broke the blog a few times and we're top notch at dick and fart jokes. This won't hold us back Marlies fans, no sir.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00323.warc.gz
|
CC-MAIN-2023-06
| 484
| 2
|
https://supportforums.cisco.com/discussion/10456171/network-connectivity-help
|
code
|
We have an issue with our cluster server not being able to ping servers on the same subnet.
The cluster server is patched to the core switches and is a physical HP blade server.
I was able to ping the gateway address: 10.55.0.1 but was unable to ping: 10.55.0.134
or any other server on that segment, which, as you can see, is on the same subnet and VLAN
I have attached ping and tracert results as you can see what is happening.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721008.78/warc/CC-MAIN-20161020183841-00493-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 417
| 5
|
https://forum.duolingo.com/comment/18998063/%CE%9F-%CE%BF%CF%80%CE%B1%CE%B4%CF%8C%CF%82-%CE%B5%CE%AF%CE%BD%CE%B1%CE%B9-%CF%87%CE%B1%CF%81%CE%BF%CF%8D%CE%BC%CE%B5%CE%BD%CE%BF%CF%82
|
code
|
True, "φίλαθλος" can only refer to sports, whereas "οπαδός" could refer to other stuff as well. But you could say there is another difference.
"Φίλαθλος" is someone who likes, who loves, who's a "friend" (φίλαθλος = φίλος + άθλος) of a sport (or the team, but mostly the sport itself).
"Οπαδός" is someone who likes or loves the actual team, not necessarily the sport. Team comes before the sport for them.
I hope I helped you enough^.^
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738864.9/warc/CC-MAIN-20200812024530-20200812054530-00363.warc.gz
|
CC-MAIN-2020-34
| 482
| 4
|
http://hypermonde.net/
|
code
|
Roxame.1 was developed in 2000-2005, and explored the creative field: algorithms*random. Its original website is still online.
Roxame.2's development started in 2015; it explores the field algorithms*data and aims to produce any kind of media, from a simple bit to a film, using the same principles. She (see Introduction) integrates and extends the algorithms of Roxame.1, but refrains from using the random() function.
In July 2016, we published an Interim Report summarizing our research since 2000 and explaining our feelings and new aims.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496669.0/warc/CC-MAIN-20200330054217-20200330084217-00094.warc.gz
|
CC-MAIN-2020-16
| 549
| 3
|
https://www.postjobfree.com/resume/ac8wd6/python-tableau-scientist-st-charles-mo
|
code
|
636-***-**** e-mail id : email@example.com
Passionate data enthusiast with around 6 years of experience as a professionally qualified Data Scientist in statistical modelling, machine learning, data mining and data visualization.
Developed statistical machine learning, text analytics and data mining solutions for various business scenarios and generated data visualizations using Python and Tableau.
Strong ability to analyze sets of data for signals, patterns, ways to group data to answer questions and solve complex data puzzles.
Good knowledge on statistical analysis techniques like Confidence Interval, Hypothesis testing, ANOVA, Conjoint analysis, sentiment analysis and semantic analysis.
Comprehensive experience in developing solutions to complex business problems using Machine Learning models and gave feasible visualization using Python.
Adapt and deep understanding of Statistical modeling, Model testing, problem analysis model comparison, Optimization and Validation.
Constructed multiple Machine Learning models using Python’s scikit-learn libraries and used numpy, Pandas libraries to work with Data-frame, dictionaries, numpy-arrays.
Strong hands-on work experience in implementing Linear, Multi-Linear Regression, Logistic Regression, SVM, Naïve Bayes, Decision Trees, Random Forest Classifiers, Natural Language Processing and K-means Clustering methods to solve various business problems.
Implemented deep learning techniques like Artificial Neural Networks, Recurrent Neural Networks and Convolutional Neural Networks using Tensorflow, Keras and used Theano to solve various business problems.
Forecasted sales, demand for loans and future values using Time-series modelling techniques like Autoregressive, Moving Average, ARIMA and Holt-Winter.
Highly skilled in advanced Regression modeling, Time series analysis, Correlation, Multivariate analysis.
Developed viable visualization to display results and explained results visually using Python packages such as Seaborn, matplotlib, ggplot and pygal.
Extracted data and worked with data from multiple database sources like Oracle, SQL Server, DB2, mongo DB, Cassandra, NoSQL and Teradata.
Generated viable representations of ML models using Tableau for client and higher management.
Extensive knowledge of Data Science Lifecycle, SDLC, waterfall and Agile methodologies and used Agile methodologies to develop software products.
Proficient in Statistical modeling, Applied Mathematical methods and having expert knowledge in various business and engineering domains.
Forecasted behavior of Mechanical, thermal, fluid systems by creating mathematical model using linear, multi-linear and non-linear Regression and performed fault analysis of systems.
Professional, enthusiastic and self-driven leader having led multiple teams to analyze real-world business problem and collaborated with scientists, engineers with vision of adding value to business from data through Data Science and Machine Learning techniques.
Relocation: anywhere in United States
No sponsorship required to work in the United States
J.C. Penny, Plano,Texas February 2017 to present
Data Scientist 1
Role Summary: JC Penny is an omnichannel retail chain with over 870 stores in the United States and Puerto Rico, revolutionizing shopping and investing in technology and resources to make the shopping experience easy and seamless across all channels and devices, offering convenient delivery and pickup options. My job involved creating a customer lifetime value model to reduce customer churn and performing sales forecasting.
Communicated with management to discuss insights obtained from data, assisted in taking best business decisions, reduced Customer Churn by 10% in few months of implementation by extracting value from data.
Performed Customer segmentation based on customers behavior, demographics, transactions and customer specific details like age, income and created multiple customer classes.
Constructed customer classes with historical, demographic and behavioral data as features using Random Forest Classifier and Logistic Regression to help marketing team understand purchase pattern of customers.
Assisted marketing team to devise business strategy to target customers with discount coupons, deals and offers to improve customer purchases.
Identified distinct patterns in which customers respond to offers and clustered their actions using K-means, K-means++ Clustering, Hierarchical Clustering and segmented them into different groups, helped marketing team to further analyze behavioral patterns of customers.
Using a multi-linear regression algorithm, we created a customer lifetime value (CLV) model from each customer's first six months of data, identified high- and low-value segments, and helped the employer understand customers and improve customer service to retain them.
For better revenue generation, we finally proposed marketing strategies to target potential customers using their first three months of data and the regression model, from which we evaluated CLV for every new customer.
Collaborated with risk management team and provided insights using various analysis models from python libraries like pyfolio, empyrical, qfrm and VisualPortfolio.
Investigated large datasets to handle missing values, cleaned messy datasets and applied feature scaling to standardize range of independent variables.
Researched predictive models including Logistic Regression, Support Vector Machine (SVC) and re-enforcement learning to prevent retail fraud.
Improved model performance by tuning hyper-parameters using optimization techniques like Grid search, Random search and Bayesian optimization and increased model efficiency by XG-Boosting
Validated models using Cross validation, loss function to measure model performance and created Confusion Matrix, ROC and CAP curves. Addressed overfitting and underfitting by tuning hyperparameters using L1 and L2 Regularization
Applied dimensionality reduction technique like Principal Component Analysis (PCA) to extract relevant optimal features from high dimensional data.
Forecasted sales from historical sales data using Time-series modelling techniques like ARIMA and Holt-winter model. Assisted supply chain management team in meeting customers demand and maintaining stock at stores.
Visualized results using Matplotlib, Seaborn libraries of scikit-learn and used Tableau to present results on dashboards for team members, management and other relevant departments in company.
Client: Wells Fargo October 2014 to November 2016
Role Summary: Wells Fargo & Company is an American multinational financial services company headquartered in San Francisco, California, with central offices throughout the United States. It is the world's fourth-largest bank by market capitalization and the third largest bank in the US by total assets. Involved in evaluating customer credit data and financial statements in order to determine the degree of risk involved in lending money.
Developed predictive solutions to support commercial banking team using machine learning algorithms such as Linear Regression, Logistic Regression, Naive Bayes, Decision Trees, Random Forest, Support Vector Machine in Python.
Conducted analysis in assessing customer behaviors with clustering algorithms such as K-Means Clustering and Hierarchical Clustering.
Evaluated parameters with K-Fold Cross Validation, Grid search methods to optimize performance of models
Worked on data cleaning, data preparation and feature engineering with Python, including NumPy, SciPy, Matplotlib, Seaborn, Pandas, and Scikit-learn.
Along with data analytics and Excel data extracts, implemented Agile methodologies, Scrum stories and sprints in a Python-based environment.
Designed and built world-class, high-volume, real-time data ingestion frameworks and automated various data sources into big data technologies like Hadoop.
Used Pig as ETL tool to do transformations, event joins and some pre-aggregations before storing the data onto HDFS.
Used MySQL and created Sql tables and involved in data loading and writing Sql UDFs.
Experience designing and optimizing complex SQL queries involving table joins using MySQL.
Worked in Tableau environment to create weekly, monthly, daily reports using tableau desktop & publish them to server.
Worked on importing and exporting data from Oracle into HDFS using Sqoop.
Worked on Excel using VLOOKUP, pivots, conditional formatting, large record sets, data manipulation and cleaning.
Used GIT HUB as version control software to manage the source code and to keep track of changes to files which is fast and light weight system.
Environment: Python, MySQL, SAS, Pig, HDFS, Hive, Excel, Tableau and GIT
Client: Seasonal Tastes July 2014 to September 2014
Role Summary: Seasonal Tastes is a restaurant with locations in Gurgaon, Mumbai and Hyderabad which serves Chinese, Asian, international and traditional vegetarian Indian cuisine. My role involved identifying customer sentiment about food and service using reviews from various websites to assist in shaping advertisement strategies, improving customer service and increasing the customer base for more business.
Performed sentiment analysis of customer reviews and classified each review into good, bad and neutral class to understand pulse of customers about business.
Implemented Porter Stemmer (Natural Language Tool Kit) with NLP bag of words model using Count Vectorizer class to process text data.
Created predictive model using LSTM, Recurrent Neural Networks (RNNs) and studied reviews, obtained feedback on customer service to help employer reduce customer churn.
Experimented with other classification models like Random Forests, Logistic Regression and Naïve Bayes to classify customers reviews.
Extracted data from web using Web Scraping, Text mining and preprocess data into tab separated file to separate reviews by tab in data.
Cleaned dirty data and prepared data for feature extraction using Count Vectorizer of scikit-learn feature extraction library.
Automated customer service by creating chat box which responds to customer queries using deep learning and text processing with nltk of NLP library.
Evaluated model performance by creating confusion matrix, classification report and accuracy score. Improved model performance by k-fold cross validation and XG-Boosting and achieved model accuracy of 92%.
Developed Recommender systems using Apriori associate rule learning, sales data. Recommended attractive deals, cuisines and increased number of customers by 15%, worked with marketing team to devise powerful marketing strategy.
Demonstrated experience in design and implementation of Statistical models, Predictive models, enterprise data model, metadata solution and data lifecycle management in both RDBMS, Big Data environments.
Presented simple visualization of results using seaborn visualization libraries of Python.
Increased client business by 10% in six months by efficiently transforming customer service based on feedback obtained from sentiment analysis.
Client: Westpac Banking Corporation, India August 2013 to June 2014
Role Summary: Westpac is an Australian bank and financial-services provider. Westpac has 14 million customers and employs almost 40,000 people. The job involved collecting data from various data sources and pumping it through Informatica workflows into a data warehouse. This project involved data correction and business logic implementation using PL/SQL and other scripting languages like shell scripting.
Acquired data from primary or secondary data sources and maintain databases/data systems.
Established new client data preparing them for entry into new platform.
Loaded data by converting CSV file into corresponding database tables.
Worked with management team to create prioritized list of needs for each business segment.
Monitored and resolved issues of data flow on daily basis. Also created views for reporting team to use data for marketing numbers on daily basis.
Collaborated with reporting team to resolve data discrepancies and logical data corrections which are occurring throughout reports.
Generated Tableau ad-hoc reports using excel sheet, flat files, CSV files.
Used data mining techniques for outlier detection and created algorithm to connect patterns between customer trends.
Created Software solutions in Software development lifecycle (SDLC) and Agile methodologies environment.
Performed computational tasks on data by creating pig, hive and Map reduce scripts to access and transform data in HDFS.
Developed and implemented metadata models for reporting functionalities and developed automated process for data corrections.
Written SQL, NoSQL and PL/SQL scripts to extract data from database and for testing Purposes.
Reviewed logical model with application developers, ETL team, DBAs, and testing team to provide information about data model and business requirements.
Identify and log defects if/when test fail, using SQL to narrow down root cause of problem for efficient investigation by development team and log accordingly.
Used advanced Excel functions to generate spreadsheets and pivot tables.
Masters : Computer and Information Sciences
Bachelors : Electronic and Computer Sciences
References available upon request
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738982.70/warc/CC-MAIN-20200813103121-20200813133121-00042.warc.gz
|
CC-MAIN-2020-34
| 13,260
| 89
|
https://daq00.triumf.ca/MidasWiki/index.php?title=Main_Page&oldid=954
|
code
|
MIDAS is a modern data acquisition system developed at PSI and TRIUMF. Supported hardware includes VME, Fastbus, CAMAC, RS232, GPIB, USB, ethernet, fiber optic and MSCB attached data acquisition devices. MIDAS is written in C/C++ and runs on Linux, MacOS and MS Windows. It is licensed under the GPL.
On August 2013, the complete MIDAS documentation has been moved to TRIUMF for consistency and can be accessed via http://midas.triumf.ca. Only a (very historic) introduction is still accessible at PSI.
- https://bitbucket.org/tmidas/midas - MIDAS GIT Repository
- https://midas.triumf.ca/forum - MIDAS discussion forum
- http://daq.triumf.ca - TRIUMF DAQ Wiki
- http://ladd00.triumf.ca/~daqweb/doc/midas/doc/html - MIDAS Doxygen documentation
All MIDAS programs are under the GNU Public License
The GIT tree is available via a web interface. Using this interface, the most recent file versions can be obtained.
Alternatively, the files can be directly obtained via GIT by entering:
git clone https://bitbucket.org/tmidas/midas
git clone https://bitbucket.org/tmidas/mxml
cd midas
make
cd doc
make
Supported operating systems are:
- primary: MacOS, RHEL/SL/SLC Linux
- secondary: MS Windows, Ubuntu Linux, Fedora Linux, FreeBSD, VxWorks, others (see Makefile)
The bitbucket repository is mirrored nightly to TRIUMF at http://ladd00.triumf.ca/~daqweb/git. Please use this mirror to install stable releases of MIDAS. To follow the latest development version, please clone from bitbucket.
News about bug fixes and new releases of MIDAS can be obtained from the MIDAS news group located at TRIUMF, which uses the ELOG system. Users can register in this system to be notified automatically via e-mail when new entries are submitted.
Another source of information is the Bitbucket Repository of MIDAS, where one can see the latest changes to the software.
- ELOG - Electronics Logbook from PSI
- ROOT - data analysis package from CERN
- ROOTANA - ROOT-based analyzer for MIDAS
- ROODY - viewer for online histograms
- http://midas.psi.ch/rome/ - MEG/PSI data analysis package
- Pre-GIT documentation at TRIUMF
- Documentation at TRIUMF
- Historical files collection (PSI)
- Historical files collection (TRIUMF)
- New Wiki Documentation (in construction)
- MIDAS installation instructions for TRIUMF experiments
- Common problems & Debugging recipes
- Format of MIDAS history files
- Note on the alarm system
- Note on the ODB hotlink function (db_open_record())
- Note on race condition and deadlock between ODB lock and SYSMSG lock in cm_msg()
- Note on prevent start due to alarm or required programs
- List of various minor modification to be put into the right place of the documentation
- Documentation of mhttpd.js
- Documentation of MIDAS AJAX functions
- Note on the new history configuration
Consult the User's Guide for information on using the wiki software.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643462.13/warc/CC-MAIN-20230528015553-20230528045553-00623.warc.gz
|
CC-MAIN-2023-23
| 2,862
| 38
|
https://www.fi.muni.cz/~kas/blog/index.cgi/2015/07/index.writeback
|
code
|
Tue, 14 Jul 2015
Which Web Gallery?
I am looking for the best way to publish my photos on the Web. So far I have ruled out putting my photos on some "cloud" service out of my control (Picasa, Flickr, Rajče). I want something which can generate a static tree of files (HTML/CSS/JPG/JS), which can then be published by any web hosting service, or even on my own server.
Some time ago I tested Highslide.js, but this is more of a lightbox than a gallery, and it cannot adapt itself to the size of the screen.
What looks promising so far, is the thing named Photoswipe. There still are some problems, though:
- When the image has much wider aspect ratio than the screen, the image caption is displayed far away from the image itself.
- The thumbnail view somewhat sucks (see the thumbnail lists near the bottom of their own getting started page).
So, my dear lazyweb: which gallery for static files do you use? I would like to have something with the following properties:
- Works on different screen sizes (even Picasa sucks at this).
- Easy to generate all the data from large JPEGs with comments/title.
- The ability to link individual images (Highslide sucks at this).
What would you recommend?
5 replies for this story:
Vašek Stodůlka wrote:
I know that it is maybe something different than what you are searching for, but I'm using Trovebox. You can see it live at fotky.stodulkovi.cz. It supports private and public stuff, sharing by creating a link, albums, tags (!) and is quite fast. What is bad is that Trovebox is discontinued, but it is still by far the best privately-hosted gallery I have seen, and I have searched a lot. (I had to do also some tweaks to have it working the way I want.)
Yenya wrote: Re: Vašek Stodůlka
Apparently your trovebox requires cookies or whatever - I was not able to make it display any photo at all - just the surrounding text and an empty page.
Vašek Stodůlka wrote:
Hm, you are right with cookies, I have never tried this. :-) There is session ID cookie. IMHO there is nothing wrong on cookies, as long as they are used by the server, which originally issued them. BTW - Picasa works without cookies?
Yenya wrote: Re: Picasa
Picasa without cookies? No idea, I don't use Picasa. And for things like Google Excel (or whatever, people occasionally send me links to that crap and want me to write something in there) I tend to have a special Firefox session which has cookies for TheBigBrother.com allowed.
Michal wrote: JAlbum
How about JAlbum?
Reply to this story:
Mon, 13 Jul 2015
Systemd Developer Attitude
Systemd. Some people love it, some people hate it. My own position is somewhere in between: I think many things they are trying to solve are real problems which need solutions, the system should "just work" for common use without the configuration, etc. But sometimes the overall attitude of the systemd developers is just plain wrong. The following bug report shows the problem pretty clearly:
TL;DR: it can be summarized as follows:
- systemd-timesyncd uses Google time servers by default.
- These time servers are sometimes wrong because of the non-standard "leap second smearing" done by Google.
- Google has asked that their servers are not set up as defaults in systemd.
There are several solutions to this problem which I would consider clean and fair:
- Remove the default time servers from the configuration, let the user decide (e.g. to use a NTP pool).
- Register a NTP pool vendor zone and use it as defaults.
- Let somebody else register and maintain a NTP pool vendor zone (CoreOS people offered to do this).
The systemd maintainer's response was "we are not a vendor, we don't want a vendor pool" and "let's add a warning when somebody uses the defaults". I think using Google servers against the will of their owner is pretty rude, and having defaults which need to be replaced, even though the possibility of having sane defaults exists, is inconsiderate to their users.
In my opinion, the above clearly shows the attitude of systemd developers towards the rest of the world.
0 replies for this story:
Reply to this story:
Fri, 10 Jul 2015
My First CVE Number
After banging our collective heads against the wall while trying to discover why one Samba share works as we expect, while another one with the same configuration on the same server does not, I have finally admitted that the bug is not in our setup, but probably in Samba itself.
Interestingly enough, the expected behaviour was on the share where it did not work, and the other one worked only by accident. The fact that it worked in one case turned out to be a potential minor security issue. So this is the first security issue I have discovered which has its own CVE number: CVE-2015-3287 (details will be in Samba bug #11395 after it is declassified).
I appreciate the fast response of Samba developer Jeremy Allison: the first fix was available within 3.5 hours after the bug was reported.
1 replies for this story:
Peter Kruty wrote:
3.5h, that's pretty fast. Nice.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817103.42/warc/CC-MAIN-20240416155952-20240416185952-00619.warc.gz
|
CC-MAIN-2024-18
| 4,990
| 47
|
https://sites.google.com/site/bibledit/gtk/installation/debian/bibledit-4-1-on-debian-5-0
|
code
|
The instructions assume that Debian was installed with all defaults, including the graphical Desktop.
Run Desktop - Administration - Synaptic Package Manager.
Install the following packages:
g++ libgtk2.0-dev libsqlite3-dev libxml2-dev git-core libenchant-dev libgtkhtml3.14-dev rcs libgtksourceview2.0-dev libwebkit-dev libdbus-glib-1-dev curl apache2 php5 libsoup2.4-dev
Apply the changes.
When through, open a terminal by clicking Applications, Accessories, Terminal.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125532.90/warc/CC-MAIN-20170423031205-00072-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 470
| 6
|
https://libguides.rutgers.edu/c.php?g=336983&p=2268246
|
code
|
Looking for a full-length biography of someone? Go to The Catalog (QuickSearch), and select "Advanced Search." Use the pull-down menu to change "Any Field" to "Subject." Then enter the name of the person in whom you are interested. For example:
If you're looking for works for which that person is responsible for most of the content (an autobiography; photographs; news reports; etc.) search as above, but now change "Any Field" to "Author". For example:
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107912593.62/warc/CC-MAIN-20201031002758-20201031032758-00206.warc.gz
|
CC-MAIN-2020-45
| 791
| 3
|
https://chainerweb.com/how-to-become-a-devops-engineer-in-six-months-or-less-part-4-package/
|
code
|
In Part 1, we talked about the DevOps culture and the foundations required:
How To Become a DevOps Engineer In Six Months or Less
NOTE: This is Part 1 of a multi-part series.medium.com
In Part 2, we discussed how to properly lay the foundation for future code deployments:
How To Become a DevOps Engineer In Six Months or Less, Part 2: Configure
NOTE: this is Part 2 of a multi-part series. For Part 1, please click here.medium.com
In Part 3, we talked about how to keep your deployed code organized:
How To Become a DevOps Engineer In Six Months or Less, Part 3: Version
NOTE: this is Part 3 of a multi-part series. For Part 1, please click here. Part 2 is here.medium.com
Here, we’ll talk about how to package your code for easy deployment and subsequent execution.
For reference, we are here in our journey:
NOTE: You can see how every part builds on the previous part and lays the foundation for the subsequent part. This is important and is done on purpose.
The reason is, whether you are talking to your current or future employers, you have to be able to articulate what DevOps is and why it’s important.
And you do so by telling a coherent story — a story of how best to quickly efficiently ship code from a developer’s laptop to a money-making prod deployment.
Therefore, we are not after learning a bunch of disconnected, trendy DevOps tools. We are after learning a set of skills, driven by business needs, backed by technical tools.
And remember, each phase is about a month worth of learning, for a total of six months.
OK, enough chatter, let’s get to it!
Primer on Virtualization
Remember physical servers? The ones you had to wait weeks to be PO-approved, shipped, data center accepted, racked, networked, OS-installed and patched?
Yeah, those. They came first.
Essentially, imagine if the only way to have a place to live is to build a brand new house. Need a place to live? Wait for however long it takes! Kind of cool since everybody gets a house but also not really because it takes a long time to build a house. In this analogy, a physical server is like a house.
Then people got annoyed about how long everything took and some really smart people came up with the idea of virtualization: how to run a bunch of pretend “machines” on a single physical machine and have each fake machine pretend to be a real machine. Genius!
So, if you really wanted a house, you could build your own and wait six weeks. Or you could go and live in an apartment building and share the resources with other tenants. Maybe not as awesome but good enough! And most importantly, there is no wait!
This went on for a while and companies (i.e. VMWare) made an absolute killing on this.
Until other smart people decided that stuffing a bunch of virtual machines into a physical machine is not good enough: we need more compact packing of more processes into fewer resources.
At this point, a house is too expensive and an apartment is still too expensive. What if I just need a room to rent, temporarily? That’s amazing, I can move in and out at a moment’s notice!
Essentially, that’s Docker.
NOTE: As of December 2018, this line has started to blur; see the note on Project Firecracker later in this section.
Birth of Docker
Docker is new but the idea behind Docker is very old. An operating system called FreeBSD had a concept of jails that dates back to 2000! Truly everything new is old.
The idea then and now was to isolate individual processes within the same operating system, in what is known as “operating system level virtualization.”
NOTE: This is not the same thing as “full virtualization”, which is running virtual machines side by side on the same physical host.
What does this mean in practice?
In practice, this means that the rise of Docker’s popularity neatly mirrors the rise of microservices — a software engineering approach where software is broken into many individual components.
And these components need a home. Deploying them individually, as stand-alone Java applications or binary executables, is a huge pain: how you manage a Java app is different from how you manage a C++ app, and that's different from how you manage a Golang app.
Instead, Docker provides a single management interface that allows software engineers to package (!), deploy and run various applications in a consistent way.
That is a huge win!
OK, let’s talk more about the pros and cons of Docker.
Benefits of Docker
Docker allows every service to have full process isolation. Service A lives in its own little container, with all of its dependencies. Service B lives in its container, with all its dependencies. And the two are not in conflict!
Moreover, if one container crashes, only that container is affected. The rest will (should!) continue running happily.
This benefits security as well. If a container is compromised, it is extremely difficult (but not impossible!) to get out of that container and hack the base OS.
Finally, if a container is misbehaving (consuming too much CPU or memory) it is possible to “contain” the blast radius to that container only, without impacting the rest of the system.
Think about how the various applications are built in practice.
If it’s a Python app, it will have a slew of various Python packages. Some will be installed as pip modules, others are rpm or deb packages, and others are simple git clone installs. Or, if done with virtualenv, it will be a zip file of all the dependencies in the virtualenv directory.
On the other hand, if it’s a Java app, it will have a gradle build, with all of its dependencies pulled and sprinkled into appropriate places.
You get the point. Various apps, built with different languages and different runtimes, pose a challenge when it comes to deploying these apps to prod.
How can we possibly keep all of the dependencies satisfied?
Plus, the problem is worse if there are conflicts. What if service A depends on Python library v1 but service B depends on Python library v2? Now there is a conflict since both v1 and v2 cannot co-exist on the same machine.
Docker allows not only for full process isolation but also for full dependency isolation. It is entirely possible and common to have multiple containers running side by side, on the same OS, each with its own and conflicting libraries and packages.
Again, how we manage disparate applications differs between applications. Java code logs differently, is started differently and monitored differently from Python. And Python is different from Golang, etc.
With Docker, we gain a single, unified management interface that allows us to start, monitor, centralize logs, stop, and restart many different kinds of applications.
This is a huge win and greatly reduces operational overhead of running production systems.
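As a small sketch of that unified interface, here is what starting, waiting on, and reading logs from two very different containers looks like with the Docker SDK for Python (pip install docker); the image names and commands are just examples, not anything from the article:

```python
# A sketch of the unified management interface using the Docker SDK
# for Python ("pip install docker"); images/commands are examples only.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Two containers with different runtimes, started the same way.
a = client.containers.run("python:3.8-slim", "python -c 'print(1)'",
                          detach=True, name="service-a")
b = client.containers.run("python:3.11-slim", "python -c 'print(2)'",
                          detach=True, name="service-b")

for c in (a, b):
    c.wait()                          # same call, whatever runs inside
    print(c.name, c.logs().decode())  # unified log access
    c.remove()
```

Note how neither the start, wait, log, nor cleanup call cares what language runs inside the container; that is exactly the operational win described above.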
NOTE: As of December 2018, you no longer have to make a choice between fast startup of Docker and security of VMs. Project Firecracker, courtesy of Amazon, attempts to fuse the best of both worlds.
However, as awesome as Docker is, it has downsides.
First, running Docker is still running servers. Servers are brittle and flaky. They must be managed, patched and otherwise cared for.
Second, Docker is not 100% secure. Not like a VM, at least. There is a reason why huge companies that run hosted containers do so inside VMs, not on top of bare metal. They want fast startup times of containers and security of VMs!
Third, nobody really runs Docker as is. Instead, it is almost always deployed as part of a complex container orchestration fabric, such as Kubernetes, ECS, docker-swarm or Nomad. These are fairly complex platforms that require dedicated personnel to operate (more on these solutions later).
However, if I’m a developer, I just want to write code and have somebody else run it for me. Docker, Kubernetes and all that jazz are not simple things to learn — do I really have to?!
Short answer is, it depends!
For people who just want somebody else to run their code, AWS Lambda (and other solutions like it) are the answer:
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume — there is no charge when your code is not running.
If you have heard of the whole “serverless” movement, this is it! No more servers to run or containers to manage. Just write your code, package it up in to a zip file, upload to Amazon and let them deal with the headache!
Moreover, since Lambdas are short lived there is nothing to hack — Lambdas are pretty secure by design.
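For comparison, a complete Lambda deployment artifact can be as small as the following Python handler; the event shape (a "name" key) is an assumption made purely for illustration:

```python
# A complete (toy) AWS Lambda handler in Python. AWS invokes
# lambda_handler() per request; there are no servers to manage.
# The event shape (a "name" key) is an assumption for illustration.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```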
Great, isn’t it?
It is but (surprise!) there are caveats.
First, Lambdas can only run for a max of 15 minutes (as of November 2018). This implies that long running processes, like Kafka consumers or number crunching apps cannot run in Lambda.
Second, Lambdas are Functions-as-a-Service, which means your application must be fully decomposed into microservices and orchestrated with other complex PaaS services like AWS Step Functions. Not every enterprise is at this level of microservice architecture.
Third, troubleshooting Lambdas is difficult. They are cloud-native runtimes and all bug fixing takes place within the Amazon ecosystem. This is oftentimes challenging and non-intuitive.
In short, there is no free lunch.
NOTE: There are now “serverless” cloud container solutions as well. AWS Fargate is one such approach. However, I’m ignoring that for now since these tend to be fairly expensive and are still sparingly used.
Docker and Lambda are two of the most popular modern, cloud-native approaches to packaging, running and managing production applications.
They are often complementary, each suited for slightly different use cases and applications.
Regardless, a modern DevOps engineer must be well versed in both. Therefore, learning Docker and Lambda are good short- and medium-term goals.
NOTE: Thus far in our series we have dealt with topics that Junior to Mid-Level DevOps Engineers are expected to know. In subsequent sections, we will start discussing techniques that are more suited for Mid-Level to Senior DevOps Engineers. As always, however, there are no shortcuts to experience!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649343.34/warc/CC-MAIN-20230603201228-20230603231228-00154.warc.gz
|
CC-MAIN-2023-23
| 10,050
| 75
|
https://support.bizbrains.com/l212/general
|
code
|
This tab contains general information about the partner.
You can change general information about the partner using the fields described below by pressing the 'Edit' button. When you are done editing, remember to press the 'Save' button at the bottom of the page. Click 'Cancel' to undo your changes.
EdiPortal Id
A Link specific Id. Note that the Edi Portal Id field is not editable as it is used internally by Link to uniquely identify the partner.
Partner Name
The name of the partner is displayed here.
Partner Type
The type of the partner can be changed here. Partners can be either internal or external. There is a clear distinction between internal and external partners. An internal partner is the company that owns the Link installation or a subdivision of it. External partners are the companies that are exchanging documents with the internal partner(s).
Accept Error Messages
If you mark Accept Error Messages, then the error messages created by Error Handling will be routed to this partner. To make Error Handling create an error message for a partner, you have to mark Notify stakeholder, Notify sender, or Notify receiver in the setup of Error Handling. Read more about Error Notifications on the 'Error Handling' page.
Comments
If you have any comments or notes about the partner, they can be written here.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00467.warc.gz
|
CC-MAIN-2024-10
| 1,453
| 14
|
https://community.filemaker.com/thread/112156
|
code
|
Not sure what calculations you are using, or how your database is designed, but you may be able to skip the Calculation Field Types altogether and instead do the Calculations in ScriptManager when needed. Then your database would only have fields with data, which should make navigating quick no matter how many records you have. When needed you run a script which will then do the calculations.
There are several options. One is to take steps via scripts to keep your found sets always small. Don't allow a Show all records or such.
And with some data you can set up a "summary table" where the data is periodically updated to compute summary totals and store the totals in number fields so that your summary reports use the stored number fields from this table instead of summary fields that have to total large numbers of records.
Here's an example of something I've had working since FileMaker 4 or 5...
We have an invoicing type system. Since we buy scrap metal and redeem used beverage container deposits, they are really Purchase Orders, but the table structure is the same as you see with typical invoices:
Since we serve from 500 to 1000 customers a day with POs that have a minimum of 4 lineItem records each, Summary and cross tab reports of LineItems data spanning up to 5 years would need to crunch numbers from literally millions of records and would bring the system to a crawl while waiting for the progress bars to fill and the summary totals (of which we have for both average cost, total cost and total weight) update.
So I set up a script to run once a night that takes all the current day's line items and creates one record for each type of material purchased/redeemed with totals/averages computed and stored in number fields. That takes a set of data from up to 4000 records and "condenses" it down to about a dozen different records for the day.
And thus, a Five year comparison cross tab report with monthly totals and averages can be produced from this table without any noticeable delays.
But also note that this is pretty straight forward for data that once recorded, is never (or almost never) changed. Even so, it's taken an extra effort scripting wise to make sure that this summary table is always in agreement with the original line item data.
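For readers outside FileMaker, the same nightly condensation pattern can be sketched in Python with SQLite; every table and column name below is made up for illustration and is not taken from the actual system:

```python
# Sketch of the nightly summary-table pattern described above, in
# Python + SQLite rather than FileMaker. All table and column names
# are illustrative assumptions.
import sqlite3

con = sqlite3.connect("purchases.db")
con.execute("""CREATE TABLE IF NOT EXISTS line_items (
    day TEXT, material TEXT, weight REAL, cost REAL)""")
con.execute("""CREATE TABLE IF NOT EXISTS daily_summary (
    day TEXT, material TEXT,
    total_weight REAL, total_cost REAL, avg_cost REAL,
    PRIMARY KEY (day, material))""")

# Condense today's thousands of line items into roughly a dozen
# stored rows, one per material.
con.execute("""INSERT OR REPLACE INTO daily_summary
    SELECT day, material,
           SUM(weight), SUM(cost), SUM(cost) / SUM(weight)
    FROM line_items
    WHERE day = date('now')
    GROUP BY day, material""")
con.commit()
```

Reports spanning years of data then read the small daily_summary table instead of crunching millions of line item records.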
That should do the trick, because once data is entered, it is never changed (once the day goes by, the data is set to remain unchanged).
A scripting solution should do the trick, I just need to know when to run it (maybe on close? or a script trigger like OnSave or something?).
Do you have example file?
The sample file would be a working copy of our DB and I'm not allowed to hand out copies of it. If this task were the only part of the script, I'd use a server schedule to run it. But since it also does some data imports from one FileMaker File to another to archive data (including the individual line items.) I've set up a robot file opened with Windows Scheduled Tasks that performs this script once a night just before the back up schedule backs up the file.
OnLastWindowClose might work, but since you might close a file more than once in a day and this could tie up that client for several minutes, it might not be the best option unless you added some additional code, such as code that checks the time of day and only runs the script if it's after close of business, or code that asks whether you want to do this so that you can cancel out of it...
On the other hand, if you host from FileMaker Pro, you could set this up so that it only runs with OnlastWindowClose on the host computer by having the code check to see if the current computer is the host computer and exit if it is not.
I created another layout and named it "hard enter" for self-explanatory purposes.
And since I only need run it after data import, I would do the import via that layout and then just go through newly added records (find records with certain empty field, loop script, to set field from calculations).
Thanks for your inputs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00156.warc.gz
|
CC-MAIN-2017-34
| 3,976
| 18
|
http://li64-187.members.linode.com:8080/browse/GMT-2318?workflowName=bug_new&stepId=6
|
code
|
Created an attachment (id=1374)
+ Run the attached script
+ Notice that every 6 rows the data repeats.
The report should contain the STM in different coordinate systems and so they should not be identical.
I need to add to the math spec to show how to perform this computation... S. Hughes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00061.warc.gz
|
CC-MAIN-2020-24
| 290
| 5
|
https://www.br.freelancer.com/projects/php-jsp/parse-xml-jsp-file/
|
code
|
Please make JSP code for me to parse this XML:
[url removed, login to view]
and load the attributes into variables in a JSP file so I can design a search result page for my site.
2) Installation package that will install the software (in ready-to-run condition) on the platform(s) specified in this bid request.
3) Exclusive and complete copyrights to all work purchased. (No GPL, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site).
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806609.33/warc/CC-MAIN-20171122141600-20171122161600-00301.warc.gz
|
CC-MAIN-2017-47
| 497
| 5
|
https://soft.vub.ac.be/amop/uf/profiles
|
code
|
Profiles represent the user identity in the framework. They are highly extensible and besides a number of mandatory fields, flockrs can add as many custom fields as they like. For example, users could add a field with their year of graduation and use it to create a flock of nearby flockrs which graduated the same year. Furthermore, the framework provides some infrastructure that makes it easy to add new custom data types without having to write too much boilerplate code.
When a Flockr modifies his Profile, a change event is propagated to the remote interfaces of all other Flockrs. In reaction to these events, all connected Flockrs propagate these events to the interested Flocks.
Each Flockr keeps a cached Profile of the other Flockrs such that the Profile can be consulted offline. When a Flockr for which there is a cached Profile connects, he should propagate the necessary change events to make sure that both the cached Profile and the Flocks of the other Flockrs are updated.
A Profile is a property object containing a number of named fields. Each field also has a Field Type. This Field Type is used to generate the appropriate user interface component (e.g. a text input field for text, a combobox for enumerations…) and to limit the possible values that can be assigned to the field. It is also used to specify which comparator operations are allowed on the field of this type.
Field Types are represented as Field Type objects. These objects have the following API:
Some of these methods should be overridden when a custom type is added.
The following Field Types are built in:
Each profile has the following mandatory fields:
Properties of Profiles are represented as slots in the profile objects. Each Profile object has a parent object that contains the common behavior and mandatory fields for each profile. Since profiles are frequently copied over the network, they are isolate objects.
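As a rough illustration of the pattern described above (a property object whose fields carry a field type that validates values and selects a UI widget), here is a toy sketch in Python; every class and method name in it is hypothetical, not the framework's actual API:

```python
# Toy Python rendering of the Profile / Field Type pattern described
# above. All names are hypothetical, not the framework's real API.
class FieldType:
    widget = "text input"                 # UI component to generate
    def validate(self, value):            # override in custom types
        return isinstance(value, str)

class EnumType(FieldType):
    widget = "combobox"
    def __init__(self, options):
        self.options = options
    def validate(self, value):
        return value in self.options

class Profile:
    def __init__(self, **mandatory):
        self.fields = {}                  # name -> (field_type, value)
        for name, value in mandatory.items():
            self.set_field(name, FieldType(), value)
    def set_field(self, name, ftype, value):
        if not ftype.validate(value):
            raise ValueError(f"bad value for {name}")
        self.fields[name] = (ftype, value)
        # here the real framework would propagate a change event

p = Profile(username="alice")
p.set_field("graduation_year", EnumType(range(1990, 2030)), 2008)
```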
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00741.warc.gz
|
CC-MAIN-2022-21
| 1,920
| 9
|
https://2015.jsconf.eu/speakers/ola-adedoyin-applying-3d-engineering-drawing-techniques.html
|
code
|
Web app technical diagrams fall into the trap of oversimplifying complexity. And so diagrams which are meant to represent 1,000s or even 10,000s of hours of the large-scale multidisciplinary construction effort that is a web app project are reduced to sticks-and-clouds architecture drawings, coarse wireframes and/or detailed UI mockups or screenshots.
What's missing from these representations? Usually a lot - like code volume, 3rd party dependencies, environments, delivery processes and so much more. This talk presents a new vision of web app technology diagrams using the structural engineering drawing technique of isometric projections to provide a much better perspective of the multi-dimensional nature of web app programming projects.
A taste of this new style of diagramming web apps can be seen, today, at the speaker's site stackynotes.com, but diagrams with much more depth and breadth will be explored in the actual conference talk.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506646.94/warc/CC-MAIN-20230924123403-20230924153403-00104.warc.gz
|
CC-MAIN-2023-40
| 954
| 3
|
https://community.kore.ai/t/how-to-get-postback-action-data-back-in-conversation-flow-when-user-click-on-the-button-carousel-template/1729
|
code
|
Chat bots are all about conversation, processing text coming in from a user, and at any point in that conversation the user could say anything. They might be answering a particular question being asked by a bot, or they might be interrupting the current discourse, or just saying something witty. What this means is that chat bots are not wired up like traditional user interfaces, there is no explicit connection between a button press and some specific action.
Instead, everything happens through text. Whatever the user types is sent to the Kore NL platform where it is evaluated and interpreted from a variety of angles and contexts. Clicking on a button in the chat window, be it from a custom template, an ambiguity message, or a list of choices, does nothing more than send the postback text to the platform just as if the user had explicitly typed it out. The buttons merely save typing and help to guide the user instead of leaving them staring at a flashing cursor wondering what to say.
Now you don’t give any details as to how you have used this message node or the use case, so all I can do is describe some general situations.
Ordinarily a message node only generates output, and the dialog will continue on to the next node without waiting for anything from the user. There are two situations where a message can wait, or appear to wait, for a user response.
One is when the message node is the last one in the dialog. The message is sent, the dialog moves to completion and since there is nothing left to do the bot just sits there. Whatever the user types next, or simulates by pressing some button from that last message, will be interpreted as primary intent identification, i.e. another dialog, FAQ, or smalltalk. For anything meaningful to happen then you need intent training to match whatever the user “types”.
This is by far the most common use case - for example, many people use a welcome dialog that only has a message node to present a short list of common tasks where the postback text is aligned with intent training.
The other, more complicated, option is to explicitly force the message node to stop and wait for user input. But to do that then you need to add subintents as connections to the message node and explicitly change the rule to wait for user input. If a message node has any of these types of rules then it will pause the flow and the subsequent user input will be evaluated against the training for those subintents. But this is a fairly specialized scenario, so is perhaps not relevant here.
Now if what the user is indicating is not at the level of an intent, then what you need to use is an entity node instead. Entities can pause the execution flow of a dialog to wait for a user's response; the entity prompt can use all the same templates as a message node. The postback text is then first interpreted with respect to the type of entity and its associated training. The connections out of the entity node can test the entity for different values.
In summary, everything is text, and how that text is interpreted depends on training, and that training can exist at an intent and/or an entity level.
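To make that point concrete, here is a toy Python model of the idea; it is not Kore.ai code, just an illustration that a button press and typed text travel through the same interpretation path:

```python
# Toy model of "everything is text" (not Kore.ai code): a button click
# merely submits its postback string through the same path as typed input.
def interpret(text: str) -> str:
    intents = {
        "check balance": "Running the balance dialog...",
        "order status": "Running the order-status dialog...",
    }
    return intents.get(text.strip().lower(),
                       "Sorry, I didn't understand that.")

typed = interpret("Check balance")    # the user typed it out
clicked = interpret("Check balance")  # the user clicked a button whose
                                      # postback text is the same string
assert typed == clicked
```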
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00631.warc.gz
|
CC-MAIN-2021-43
| 3,150
| 9
|
http://www.sqaforums.com/forums/oracle-e-test/30605-etest-flash.html
|
code
|
I'm evaluating the eTest suite for our QA group. We test several Flash applications which run from locally installed executables rather than from inside a web browser. It looks to me from the docs that eTest cannot handle Flash (or anything else) that runs outside a browser. Is that the case?
The reason I'd posed this question was mostly because I hadn't heard back from my contact at Empirix on this topic. This morning I did hear back and the answer is that eTest cannot handle anything that runs outside its browser viewer. Might not be news to anyone on this forum, but there it is for what it's worth.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720062.52/warc/CC-MAIN-20161020183840-00125-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 608
| 2
|
http://m0n0.ch/wall/list/showmsg.php?id=264/8
|
code
|
I've got a Soekris 4800 at a site that wants to add WiFi support. I've
been considering attaching an external AP to the unused 3rd Ethernet
interface or installing a card in the unused mini-PCI slot. While I believe
the external AP approach will be simpler to install and cheaper (I think),
I'm wondering about the manageability side of things. I've never used the
AP feature within m0n0. The WiFi will only be used to access the WAN and to
VPN into the LAN. We'll use WEP but nothing more complicated than that.
Is there a preferred approach here?
Paul Dugas, Computer Engineer Dugas Enterprises, LLC
paul at dugas dot cc phone: 404-932-1355 522 Black Canyon Park
http://dugas.cc fax: 866-751-6494 Canton, GA 30114 USA
Onsite at GDOT W.Annex 404-463-2860 x199
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423723.8/warc/CC-MAIN-20170721062230-20170721082230-00310.warc.gz
|
CC-MAIN-2017-30
| 754
| 12
|
https://brianacooper.net/
|
code
|
My name is Briana Cooper, and I’m a sophomore studying Cybersecurity at Collin College in the beautiful South West.
I love to solve problems.
–> Wondering if I might be a good fit for your company? Check out my LinkedIn profile
I like to keep myself busy; over the years, I've been in a band called Bandcamp where we covered awesome 90s grunge music, from Nirvana to Stone Temple Pilots and Alice In Chains. I am currently trying to learn how to play the guitar, and from time to time I try to pick up some Python.
This site is host to a variety of things professional and personal; you can learn more about who I am, why I love what I study, or read my blog–and if you have any questions, you can contact me via e-mail.
Have an awesome day!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817158.8/warc/CC-MAIN-20240417142102-20240417172102-00839.warc.gz
|
CC-MAIN-2024-18
| 754
| 6
|
http://www.genomearchitecture.com/tag/lab-life
|
code
|
The Lab Notes
The main theme of our research is to understand how gene regulation and genome organization tie in with each other. The Lab Notes are the latest headlines from the lab, featuring a collection of random thoughts and useful code snippets.
The culture of meetings varies a lot between research teams. Most labs have a team meeting and a journal club, with a wide variation in frequency, duration and topics between labs. As the principal investigator, you want good scientific discussion in the team, but this comes at a cost that we often underestimate (I found Jason Fried's TED talk "Why work doesn't happen at work" very instructive about this).
Our lab itself is an ongoing experiment, and through trial and error we have learned a few things about scientific discussions that are worth sharing. Our first attempt to promote communication was 5-minute micro-meetings between two people. During this time, they were supposed to explain what they would do during the day, and why. The meetings were twice a week, with a rotation schedule. Even if everybody liked the idea, it turned out to be unsustainable because synchronization between the two people did not happen naturally. At the time one got a 5-minute break, the other would be in the middle of a technical experiment, and so on. After skipping a few meetings, the momentum was lost and they quickly died out.
Our second attempt...
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601615.66/warc/CC-MAIN-20200121044233-20200121073233-00443.warc.gz
|
CC-MAIN-2020-05
| 1,396
| 5
|
https://graphicdesign.stackexchange.com/tags/photo-editing/new
|
code
|
Neither hair nor beard will help if the geometry, perspective, colorfulness and light do not fit.
A shadow on the collar is inserted. It's the masked curves layer.
The saturation levels on the head and suit are adjusted.
The collar must be wider for the thicker neck. It's warped. The long nose must be near, it cannot be in the background. Heavy spherizing is applied to ...
As a professional I have access to Photoshop, so in this answer I use that, but you could use the same approach in a free alternative like GIMP.
Here we have an image in 700 × 400 px:
Then we first scale the image down to half size, 350 × 200 px using Automatic interpolation:
With this result:
And then we scale it up to the original size using Nearest ...
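The two steps described above, a smooth downscale followed by a nearest-neighbour upscale, can be reproduced with Pillow in Python; the file names are assumptions:

```python
# Reproducing the described effect with Pillow (file names assumed):
# downscale with a smooth filter, then upscale with nearest-neighbour.
from PIL import Image

img = Image.open("photo.png")              # e.g. 700 x 400 px
small = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
pixelated = small.resize(img.size, Image.NEAREST)
pixelated.save("photo-pixelated.png")
```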
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141196324.38/warc/CC-MAIN-20201129034021-20201129064021-00625.warc.gz
|
CC-MAIN-2020-50
| 721
| 9
|
https://github.com/evacchi
|
code
|
Spacemacs tabbar-mode layer
A minimal experimental actor library for Nim (wip)
Simple Haskell implementation of IFS fractals using the Chaos Game technique
Displays your profile id as a badge on the Firefox icon
Simple example of a Java ClassLoader that forcibly reloads from disk each class instance
Hello World project with C++ Actor Framework, CMake and Conan.io package management
lots of trial and error with this one :) it required more explicit bindings than I expected.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00120.warc.gz
|
CC-MAIN-2022-05
| 794
| 14
|
http://www.ign.com/boards/threads/hi-im-new-sort-of.452512779/
|
code
|
Hello people. I'm a future refugee from the Gamespy forums, which are getting shut down and absorbed into the IGN forums. Somehow I logged on to an account I created back in 2001 that only has a couple of posts... so here I am. Hooray? Let's see, I'm old, I play a lot of hockey, and my favorite game series is probably Civilization. I'm also currently playing Bioshock 2 - Minerva's Den and Draw Something.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423512.93/warc/CC-MAIN-20170720222017-20170721002017-00561.warc.gz
|
CC-MAIN-2017-30
| 407
| 1
|
http://www.jambase.com/Artists/56120/Joe-Pug/Bio
|
code
|
Joe hails from the southern streets of Maryland. He spent a few collegiate years in North Carolina but dropped out for a number of reasons (including boredom and ineptitude). Currently he resides in Chicago, where he works as a carpenter by day and a songwright by night. He's grateful to everyone listening to the songs he wrote. He apologizes to everyone living in the houses he built.
All the songs found here online were recorded by Hoosier born-and-bred Jeremy 'Doorface' Miller, with the exception of 'Call it What You Will', recorded by Richard Sales.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00023-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 557
| 2
|
https://discuss.phplist.org/t/404-error-message-upon-install-lists-admin-no-work/1012
|
code
|
Did everything manually. If I knew how to add phplist through softaculous I would do it that way.
In any case, I did it manually.
A: Got the lists folder to the public_html folder
B: changed the database names to the appropriate names in config file
C: made sure the directory and its permissions are on 755 while the files are on 644.
What should I check for next…Could it be an .htaccess problem?
What is $pageroot set to in your config/config.php file? By default phpList expects to find the lists folder directly “below” your domain, ie: www.domain.com/lists
If this is not the case, then you need to add $pageroot="/path/lists"; to your config.php file where path is the directory where you’ll find the lists directory/folder.
The quickest way for us to check things would be for you to give us the path to your phpList installation, or if you don't want to share it publicly, pick someone to share it with via instant message and let a fresh pair of eyes look to see if there is anything that you've missed.
To send an IM, I think you have to click on their username then click on Message.
Dragon, good sir, could I PM you the path? The biggest thing is that I'd hate to pay for this service if it's something stupidly simple that I'm missing. But if it requires some fixing that a professional could do, then I don't mind paying. But I'm going to be extremely pissed, lol, if it's just something like a period where it shouldn't be in the config file or something of that nature.
the pageroot is at its default…/lists…this is a fresh installation…not sure if I mentioned that…
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00529.warc.gz
|
CC-MAIN-2023-14
| 1,603
| 12
|
http://stackoverflow.com/questions/7851879/excluding-files-with-variables-from-rewrite
|
code
|
I am looking for a rewrite condition for .htaccess to ignore files which have GET variables. At the moment I have all files being routed to index.php, but I want users to be able to go to pages directly if they have anything after the ? in the URI.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894931.59/warc/CC-MAIN-20140722025814-00079-ip-10-33-131-23.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 287
| 5
|
https://events.humanitix.com/bing-chat-enterprise-webinar
|
code
|
Boost your productivity and efficiency with Microsoft Copilot
The IT Team at ATI-Mirage invite you to register for a FREE Webinar to discover how you can Boost your productivity and efficiency with Microsoft Copilot. In the webinar we will focus on a business case using Microsoft 365.
Microsoft Copilot is now available via bing.com/chat and the Microsoft Edge for Business sidebar at no additional cost for Microsoft 365 users. User and business data is protected and will not leak outside the organisation. Microsoft Copilot gives you better answers, greater efficiency, and new ways to be creative.
In this introductory webinar, you will:
- Learn how Microsoft Copilot leverages AI to provide you with fast and accurate answers to your queries.
- Discover the features and benefits of Microsoft Copilot, such as natural language processing, rich results, and creative content generation.
- Find out how the Bing side bar can summarise web pages, PDFs, and other documents for you, saving you time and effort in reading and researching.
Limited slots, register yourself today!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00660.warc.gz
|
CC-MAIN-2023-50
| 1,079
| 8
|
https://niketalk.com/threads/worst-current-sports-commentator-analyst.176553/
|
code
|
- Joined Aug 27, 2008
Pick one, post the channel and sport. My vote goes to Doris Burke from NBA on ESPN. I'm not being sexist or anything, but that woman just needs to shut it or go back to sideline reporting. She never talks about anything important and her wannabe-funny jokes just suck.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400214347.17/warc/CC-MAIN-20200924065412-20200924095412-00209.warc.gz
|
CC-MAIN-2020-40
| 290
| 2
|
https://help.thecustomerfactor.com/article/79-profit-and-loss-reports
|
code
|
Reports- Profit and Loss Reports
You can access the Reports from the homepage by hovering your mouse over "Search" and clicking "Reports" in the dropdown.
Next, click on the "Profit and Loss" button. On the date range, you can customize the dates you want to generate the report with.
In the image, we choose the period for "Last Month". Then we hit "Search".
It will then generate the last month's Profit and Loss summary. On the first page, we can see the "Income" section.
Then, you will find the details as shown below.
At the very bottom of the page, you will find "Export", "Print Summary", "Print Details" and "Print Summary and Detail". If you wish to export the file, it will be downloaded as an Excel file showing all the information you can find on this page. If you wish to print the file, you have the option to print only the Summary, the Details, or both.
If you have any questions, please let me know.
Take care and have a nice day!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100525.55/warc/CC-MAIN-20231204052342-20231204082342-00853.warc.gz
|
CC-MAIN-2023-50
| 946
| 9
|
https://nugetmusthaves.com/Tag/Aforge.net?page=4
|
code
|
Top 20 NuGet aforge.net Packages
Portable Accord Math (Noncommercial) contains non-commercial licensed scientific computing functionality of the Accord.NET Framework for mobile and tablet devices.
Portable Accord Fuzzy provides fuzzy logic functionality to the Accord.NET Framework on mobile and tablet devices.
Portable Accord Genetic provides genetic programming functionality to the Accord.NET Framework on mobile and tablet devices.
Contains mathematical algorithms which can only be used for educational applications, and cannot be used in commercial products without express permission of its authors. Currently contains the non-linear Conjugate Gradient algorithm for non-linear optimization. This package is part of the Accord.NE...
The AForge.Math library contains set of math utilities, which are used by other AForge.NET framework's libraries or may be used individually. This release has been re-packaged using .NET Standard
The AForge.Imaging library contains interfaces and classes for different image processing routines and filters. Full list of features is available on the project's web site. This release has been re-packaged using .NET Standard
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00626.warc.gz
|
CC-MAIN-2021-43
| 1,164
| 7
|
https://community.qlik.com/t5/New-to-Qlik-Sense/Hide-Prefix/td-p/25952
|
code
|
Discussion board where members can get started with Qlik Sense.
I wrote the below code in the script
SQL SELECT A,
B as %B
I am getting a connector error while debugging. I want to hide column B from the script in Qlik Sense. How can I do it?
If you don't want to import B into Qlik you can exclude it from the SQL query and then remove it from the Load section:
SQL SELECT A
Also the % is a special character that will need to be wrapped in quotes for your alias.
May be this -
B as "%B"
Or may be this -
B as [%B]
I did this but it did not work.
I need B in some other place, I just want to hide it.
It did not work, I am still getting a general OLEDB error.
This is the error I am getting.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489343.24/warc/CC-MAIN-20190219041222-20190219063222-00505.warc.gz
|
CC-MAIN-2019-09
| 673
| 16
|
https://support.apple.com/en-al/guide/security/secdc7c6c88e/web
|
code
|
Sealed Key Protection (SKP)
On Apple devices that support Data Protection, the key encryption key (KEK) is protected (or sealed) with measurements of the software on the system, as well as being tied to the UID available only from the Secure Enclave. On a Mac with Apple silicon, the protection of the KEK is further strengthened by incorporating information about security policy on the system, because macOS supports critical security policy changes (for example, disabling secure boot or SIP) that are unsupported on other platforms. On a Mac with Apple silicon, this protection encompasses FileVault keys, because FileVault is implemented using Data Protection (Class C).
The key that results from entangling the user password, long-term SKP key, and Hardware key 1 (the UID from Secure Enclave) is called the password-derived key. This key is used to protect the user keybag (on all supported platforms) and KEK (in macOS only), and then enable biometric unlock or auto unlock with other devices such as Apple Watch.
The Secure Enclave Boot Monitor captures the measurement of the Secure Enclave OS that is loaded. When the Application Processor Boot ROM measures the Image4 manifest attached to LLB, that manifest contains a measurement of all other system-paired firmware that is loaded as well. The LocalPolicy contains the core security configurations for the macOS that is loaded. The LocalPolicy also contains the nsih field, which is a hash of the macOS Image4 manifest. The macOS Image4 manifest contains measurements of all of the macOS-paired firmware and core macOS boot objects such as the Boot Kernel Collection or the signed system volume (SSV) root hash.
If an attacker is able to unexpectedly change any of the above measured firmware, software, or security configuration components, it modifies the measurements stored in the hardware registers. The modification of the measurements causes the crypto-hardware-derived system measurement root key (SMRK) to derive to a different value, effectively breaking the seal on the key hierarchy. That causes the system measurement device key (SMDK) to be inaccessible, which in turn causes the KEK, and thus the data, to be inaccessible.
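The sealing mechanism can be illustrated with a toy derivation in Python. This is not Apple's actual algorithm, only the shape of the idea: a key derived from a hardware secret plus software measurements reproduces only when every measurement matches.

```python
# Toy illustration of measurement-sealed keys. NOT Apple's real
# algorithm: it only shows that a key derived from a hardware secret
# plus software measurements changes if any measured component changes,
# so anything sealed to the old key becomes unreadable.
import hashlib
import hmac

def derive_smrk(hardware_secret: bytes, measurements: list) -> bytes:
    digest = hashlib.sha256(b"".join(measurements)).digest()
    return hmac.new(hardware_secret, digest, hashlib.sha256).digest()

uid = b"per-device-secret"  # stands in for the Secure Enclave UID
good = [b"LLB-hash", b"SEP-OS-hash", b"LocalPolicy-hash"]
tampered = [b"LLB-hash", b"evil-SEP-OS-hash", b"LocalPolicy-hash"]

assert derive_smrk(uid, good) != derive_smrk(uid, tampered)
```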
However, when the system isn't under attack, it must accommodate legitimate software updates that change the firmware measurements and the nsih field in the LocalPolicy to point at new macOS measurements. In other systems that attempt to incorporate firmware measurements but that don't have a known good source of truth, the user is required to disable the security, update the firmware, and then reenable it so that a new measurement baseline can be captured. This significantly increases the risk that the attacker could tamper with the firmware during a software update. The system is helped by the fact that the Image4 manifest contains all the measurements needed. The hardware that decrypts the SMDK with the SMRK when the measurements match during a normal boot can also encrypt the SMDK to a proposed future SMRK. By specifying the measurements that are expected after a software update, the hardware can encrypt an SMDK, which is accessible in a current operating system, so that it remains accessible in a future operating system. Similarly, when a customer legitimately changes their security settings in the LocalPolicy, the SMDK must be encrypted to the future SMRK based on the measurement for the LocalPolicy, which LLB computes on the next restart.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00168.warc.gz
|
CC-MAIN-2024-18
| 3,454
| 8
|
https://www.meetup.com/Winona-Spanish-Conversation-Meetup/
|
code
|
What we're about
Still monolingual? No problem, that can be cured. Come and join our group for casual conversations in Spanish on a variety of topics. Share a happy hour appetizer and even happier memories of making new friends and networking in another language. Once we build our little community, we can plan events together, like outings to a Latin salsa club in the cities, a Brazilian churrascaria (steak house), or learning a family recipe (hands-on) from native speakers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00464.warc.gz
|
CC-MAIN-2022-05
| 480
| 2
|
https://www.construct.net/en/forum/construct-2/closed-bugs-22/upside-down-rendering-70961
|
code
|
Sorry for no .capx.
I found something weird when I installed my Construct 2 exported project on my Firefox OS device. The whole layout is rotated 180 degrees, but the touch points are OK. For example, I touch a sprite in the top-left corner (it shows at the bottom-right) and nothing happens. When I touch the bottom-right corner, the event attached to that sprite fires.
Could it be bug with Construct 2 or OS?
Tested on Construct 2 r152 and r155.
The project was exported as an Open Web App; I also tried the HTML export plus manually adding a manifest.webapp.
Thanks for any help
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652494.25/warc/CC-MAIN-20230606082037-20230606112037-00127.warc.gz
|
CC-MAIN-2023-23
| 544
| 6
|
http://andreasuttle.weebly.com/blog1/post-title-click-and-type-to-edit
|
code
|
Hi, you need to be more descriptive in your post. Also delete the first text box that says - this is my blog post ...♥Ms. H
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102469.83/warc/CC-MAIN-20231210123756-20231210153756-00809.warc.gz
|
CC-MAIN-2023-50
| 196
| 2
|
http://www.quondam.com/38/3802n.htm
|
code
|
At this point I'd rather not define 'real scale' and 'virtual scale' because I don't want to be definitive right now. I'd rather wait and see what (if anything) others think these terms might mean. I believe there is a certain obviousness to what I meant within the context of my use of the terms, i.e., the scale used to generate building documents versus the now "world wide" implications of designing electronic/digital media systems (be they computer operating systems, web sites, entertainment corporations, or even email lists, etc.).
Real scale deals primarily with physical limits and the coordinated representation/manifestation of those limits, while in virtual scale limits are 'fluid' and/or 'meandering' and/or 'oscillating' and/or 'undulating', etc.
It would seem then that the difference between real scale and virtual scale is in how each scale respectively treats and/or renders limits. Real scale and virtual scale do not treat or render different realities, however, because all reality is relative to the limit of its container.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00071.warc.gz
|
CC-MAIN-2022-33
| 1,049
| 3
|
https://www.xach.com/naggum/articles/3223977130308446@naggum.net.html
|
code
|
Subject: Re: newbie in deep over his head
From: Erik Naggum <email@example.com>
Date: Fri, 01 Mar 2002 13:12:05 GMT
Newsgroups: comp.lang.lisp
Message-ID: <firstname.lastname@example.org>

* Erik Naggum
> I think using #'(lambda ...) is a notational grossity.

* Alain Picard
| Why do you say that? Is that because you think (function (lambda ..))
| looks worse, or because that's what the (lambda ..) macro is there for?

It is because I consider #' a notational grossity in general, but it is a _functional_ notational grossity. (Is "grossity" even a word? Hm, no, but it should have been, because of "curiosity".) (Also pardon the pun.) I think (function car) is notationally better than #'car, but I think 'car is notationally better than (quote car). Why? I have no idea. Some twisted sense of aesthetics or something, I guess, or maybe I am just used to ', maybe it is so small it is innocuous, whereas #' looks like Laurel and Hardy or something. Therefore, (function (lambda ...)) is better than #'(lambda ...), but since that is such a redundancy, I much prefer (lambda ...) by itself.

| I'm curious, because I never thought of #'(lambda ..) as ugly. I just
| get used to seeing that #' everywhere I expect a function argument.

But that would obviously make it problematic to use a function that returns a function. I think of lambda as a function-constructor, whereas function is a function-getter. (function (lambda ...)) would therefore get at the function that lambda constructs. (Yes, I do know that it is function that makes the lambda form into a function.) Another function or look-alike that returns a function would be similarly unquoted.

///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886436.25/warc/CC-MAIN-20180116125134-20180116145134-00164.warc.gz
|
CC-MAIN-2018-05
| 1,800
| 1
|
http://blog.amnuts.com/category/javascript/
|
code
|
Yesterday I was starting a small, very stand-alone project that required a pretty dynamic interface. I decided to put together the components using ReactJS, but as it was essentially just one JSX file, I didn't want to have to set up a gulp file, a .babelrc file, and whatever new-fangled build process the young kids are into these days.
I thought about not using JSX syntax, but I think it adds a level of readability to the code and will make it easier for myself and others to maintain longer term, and besides which, I had already written the JSX part and didn't want to redo the work. Thankfully, though, it's really easy to use Babel from the command line to do the conversion.
First, install the required npm modules (I did this globally so that I could use it anywhere on the cli, not just in the specific project):
A while ago I wrote a jQuery plug-in that allows you to easily make selected elements all the same height. Well, it turns out that it could be a little better because it didn't take into consideration margins, padding and such. So here's a slightly more robust version.
I’ve used TinyMCE for a while – it’s an excellent WYSIWYG editor for web applications. However, the code editor in it is really simplistic. Enter CodeMagic, a plugin that allows you to replace the standard code editor with one that used the excellent CodeMirror library for syntax highlighting. So I started to use CodeMagic but found a number of issues with it. For example, it used CodeMirror 2 and not 3, had issues with using IE and the word wrapping functionality and the window resizing was never great. But with the joy of things being on GitHub and up for a bit of a forking, that’s exactly what I did.
On creating a rather large form recently, I was in the need to have some kind of hint to the user about what format the content should take on several input boxes. I could have done this with a description under the form element, but a more accepted way to do this, it seems, is to have a 'hint' in the element itself. You know the kind of thing I mean; a value, usually quite a light grey colour, that is present until you click into the form element and then it disappears. I also wanted to do this as a jQuery plug-in because, well, why not? 🙂
I came across a jQuery plug-in the other day to sort tables, and it works great and is exceptionally simple to implement (and as anyone who’s flicked through this blog knows, I like the simple things in life… Don’t need any more gray hairs popping up, you know!).
I’m pretty new to the whole jQuery game, having really only started to look at in a number of weeks ago. Until then I have been happily using Prototype and Scriptaculous, and as I knew them I didn’t see a point to move to another library. However, the more examples I saw of jQuery the more elegant I thought it was, and so now am in the process of learning it more, converting some code over and trying my hand at plug-ins. So here’s introducing my first jQuery plug-in!
This plug-in allows you to automatically append a class and title to any external resources. You can pass in multiple domains that are thought of as ‘local’, and anything else with a protocol that doesn’t belong on the passed local domains will be flagged as external. If no domains are passed then the current domain in the url will be used as-is (with the www., etc., if present).
That probably doesn’t make much sense, but I’m sure it will when using it…
Building a set of select lists that are dependent on each other can be a daunting task, but a simple two-level list - in that what you select from one drop-down will change what's displayed in one or more other drop-downs - is actually quite easy thanks to Zend Framework and Prototype, both of which support JSON.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320226.61/warc/CC-MAIN-20170624050312-20170624070312-00688.warc.gz
|
CC-MAIN-2017-26
| 3,833
| 11
|
https://quickblocks.io/docs/usecases.html
|
code
|
Access to faster, more informative data (“articulated data”) from the Ethereum blockchain opens up the possibility of a richer collection of applications than has been previously possible. Here we provide a list of potential applications that may be built on top of the QuickBlocks libraries.
Accounting / Auditing Functions
A Better RPC Interface (Web3.2)
- Improves the experience of smart contract developers and aids regulators, auditors and accountants by providing improved access to Ethereum data. May run either remotely (AWS, Digital Ocean) or locally against your own locally-running Ethereum node.
End users: Smart contract developers, auditors/regulators, accountants
Notes: When running locally, 100s of times faster than existing methods; significantly improved (articulated) data fed directly into traditional databases; programmable SDK; example source code; ability to automatically generate code; code does not need to be maintained; built-in support for ERC20 tokens and wallet contracts
Smart Contract Monitoring:
- Actively monitor one (or more) Ethereum smart contracts and user accounts (or any combination), watching for odd or 'known-dangerous' transactional patterns. Report anomalies to a list of recipients via email, SMS, or a web site whenever something of interest happens.
End users: Smart contract developers, smart contract participants (i.e. token holders)
Notes: 'Weird' things include recursive attacks, violations of invariants (token balances to ether balance), largest purchases, most active trader accounts, etc.; could potentially spawn an "insured" smart contract industry expectation.
Smart Contract Reporting:
- Instantaneous “quarterly” reports available every second. On-demand reports generated for cap tables (reports on token holders), individual ether holdings, and transaction histories (i.e. bank statements) on a per-account, per-contract-group, per-industry, or system-wide basis.
End Users: Smart contract developers, smart contract participants (i.e. token holders), economists, regulators
Notes: Allows for self-reporting on business processes, expenditures, and revenue from outside an organization–no need to wait for company reports; marketing efforts might engender an expectation that every smart contract’s accounting is fully transparent.
Automated Tax Returns:
- Automated tax reporting for any jurisdiction showing dates and cost of acquisitions, cost basis, holding period, dates and revenues on sales, and any tax liabilities.
End Users: Individual users, purveyors of smart contract systems, accountants, auditors
Notes: Historical spot prices from an agreed-upon source for each currency and/or token can be shared across the planet. With the addition of APIs from popular crypto exchanges (e.g. Kraken, Coinbase), it could also report on exchange-held accounts.
Provide data and transactional information to third parties not associated with the development team of a smart contract system. This is interesting to potential investors, industry analysts, auditors and/or regulators.
End Users: Regulators, auditors, potential investors
Note: Fully parsed data makes for much easier auditing of smart contracts and could expose non-delivery of promised behavior (i.e. are “provably true” gambling sites actually paying out at the rate they claim? Gambler Watch™).
Testing / Debugging Aids:
- Record and play back “real world” interactions with already-deployed smart contracts. Being programmable, QuickBlocks lets test engineers build test cases ad hoc (fuzzing) or modify previously recorded live playbacks.
End Users: Smart contract testing engineers, smart contract developers
Notes: Could be used against proposed new versions of the smart contract (with programmatic modification of the data to meet the new contract’s interface); could be used to aid in gas optimization
Using computational geometry code already written, visualize the ‘relatedness’ of accounts, the relationships between transactions, usage trends, upward or downward movement of token transfer activity, and myriad other things.
End Users: Economists, data scientists interested in system-wide behavior
Notes: The strength of a link could be related to the number of interactions between two accounts or the total value transferred; computational geometry (graph-based data structures) might be useful in sharding the chain (under the assumption that one would want interrelated accounts to belong to the same shard and that in most cases accounts are tightly coupled).
- As Ethereum progresses and it becomes possible for smart contracts to pay for their end users’ gas usage, it will become increasingly important to optimize gas usage in a smart contract. Existing tools report only on expected gas usage based on estimated behavior; QuickBlocks can report on actual “live” data captured in the wild. A consulting service or product could be built on top of this capability.
Smart Contract Auditing
- Some smart contracts (SingularDTV, as an example) do not have adequate logging / event generation built into them. This makes it nearly impossible to properly account for their behavior. Certain activity in a smart contract can be “protected” with semaphore events that surround value transfers, for example. These ‘semaphores’, if not properly closed, would indicate a recursive attack. There is an opportunity for consulting related to fully instrumenting smart contracts with “active events” that aid in the monitoring function.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00103.warc.gz
|
CC-MAIN-2021-39
| 5,565
| 31
|
https://www.learnvray.com/demo/the-splotches-secret-v-ray/
|
code
|
The splotches’ secret in V-Ray
This brief lesson clarifies part of STEP-4 (Final Settings); these topics
could be part of the final exam for the certification.
Have you ever got a render full of splotches with V-Ray?
Splotches are the classic artifact of approximated calculations (GI with the irradiance map), while graininess is the classic artifact of direct calculations (V-Ray lights, glossy reflections, etc.)
So if you get splotches, you already know: let’s check what’s wrong with GI! (GI = Global Illumination) Even with the irradiance map preset set to HIGH, look how many splotches I got with this scene:
This effect is reduced if I render with my final settings. In particular, by reducing the Noise Threshold (in SETTINGS) from 0.01 to 0.005, this is what I get:
The problem is reduced, but not eliminated! The secret is coming!! :-)
Just go to Irradiance map and increase:
- Hsph. subdivs: from 50 to 100
- Interpolation samples: from 20 to 50
And here we are, this is the final result:
Please check my scene with V-Ray final settings to get a clean result:
I hope this could help you in many situations!
V-Ray Licensed Instructor
– – –
To know more about these 2 important parameters, here is the official description from Chaosgroup:
Hemispheric subdivs (HSph. subdivs):
this controls the quality of individual GI samples. Smaller values make things faster, but may produce a blotchy result. Higher values produce smoother images. This is similar to the Subdivs parameter for direct computation. Note that this is not the actual number of rays that will be traced. The actual number of rays is proportional to the square of this value and also depends on the settings in the DMC sampler rollout. (For this reason, reducing the Noise Threshold from 0.01 to 0.005 also reduced the splotches!)
Interpolation samples:
this is the number of GI samples that will be used to interpolate the indirect illumination at a given point. Larger values tend to blur the detail in GI, although the result will be smoother. Smaller values produce results with more detail, but may produce blotchiness if low Hemispheric subdivs values are used.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511361.38/warc/CC-MAIN-20231004052258-20231004082258-00406.warc.gz
|
CC-MAIN-2023-40
| 2,121
| 21
|
http://www.keohi.com/keohihdtv/experttips/markrejhon/markrejhon_tips.html
|
code
|
Mark Rejhon Tips
Mark Rejhon is the Home Theater Personal Computer (HTPC) pioneer and
guru. He is the father of HTPC.
Mark's work in harnessing the power and versatility of the computer, combined with
advancements in software, made it possible to converge PC and home theater technologies.
HTPC enthusiasts and the fast growing industry surrounding it owe him a world of
thanks. Check out Mark's other accomplishments at his site at www.marky.com.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891886.70/warc/CC-MAIN-20180123091931-20180123111931-00653.warc.gz
|
CC-MAIN-2018-05
| 445
| 7
|
http://www.tomshardware.com/forum/81976-33-finding-graphics-card-bios-menu
|
code
|
Ok, I can find almost all information in my BIOS menu (pressing DEL after the computer starts up), including CPU temp, power usage, etc., but I cannot find my graphics card. Can somebody please tell me under what section the graphics card info is kept? I have an ASUS motherboard, if that makes any difference.
What options are you looking for exactly? The BIOS will likely only have AGP bus speed, AGP voltage, and AGP aperture. Your video card make/model info won't be listed anywhere in the BIOS. And you can't access the video card's BIOS from the system BIOS if you are wanting to change its clock speed or something.
Ok, here's my dilemma. I just took my graphics card (GeForce 4) out of my computer and replaced it with another one (GeForce 2) [gave the GeForce 4 to a friend]. Now when I start up the computer, Windows doesn't even load. I get a message during the DOS start-up saying that a config file is corrupt. When I try to reformat I get the same message. I had this problem before when I switched my CD-RW, and all I had to do was go into the BIOS menu and manually set the new CD-RW. I wonder how I can tell my computer that there is a new graphics card in there? Does this make any sense?
If you haven't gotten it fixed yet, try to boot up in safe mode, then remove the video card drivers and reboot to install the new ones. The problem may be that it's looking for a GF4, and it can't find it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982935910.69/warc/CC-MAIN-20160823200855-00212-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 1,401
| 4
|
https://www.giantbomb.com/forums/general-discussion-30/broken-psp-question-1108/
|
code
|
I got a used PSP (the fat one) a week ago for $40 from a friend. It worked; I went on vacation, came back, and now when I go to turn it on nothing happens at all. When I plug in the charger, the power light flashes orange once and then I get nothing. Now, it's not the battery, because I tried it in my slim PSP and it works fine. Do these PSPs like to just die randomly like that? I guess that's what I get for planning on making it a custom firmware PSP. At least I'll be able to make my money back on eBay.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607702.58/warc/CC-MAIN-20170523202350-20170523222350-00352.warc.gz
|
CC-MAIN-2017-22
| 504
| 1
|
http://www.justanswer.com/business-finance-homework/9wrb0-need-help-macro-economics-questions.html
|
code
|
Hi, I should be able to help you with this. Can you give me some more information?
Hi from JustAnswer. I'm PDtax. Since your first expert opted out, I can assist.
Can you post the list of questions and your deadline? I can then provide a price and commit to assist.
I'm a little reluctant to take your phone call without looking at your project first.
You can send it to my email: (Edited by Moderator). I'll look at it right now, but since you want this by 5:59 pm (EST, I assume), I'll see if I can assist.
I still haven't gotten any file of your questions.
Please provide the data and the maximum time you can give. You may upload the file at www.mediafire.com or www.wikisend.com and provide me the link, thanks.
How many hours are remaining until your deadline? Also, let me know: are these your assignment questions, or practice questions?
Thanks for the reply, but tell me the remaining time in number of hours, as I am in a different time zone. Also, are these your assignment questions?
Sorry, cannot answer, thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720973.64/warc/CC-MAIN-20161020183840-00202-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 1,029
| 10
|
https://community.notepad-plus-plus.org/topic/18083/c-plugin
|
code
|
I want to write a C# plugin, but the template I found was quite outdated and didn’t load. It wants me to download a target for .NET 2. Just so there is no confusion, it wants the .NET Framework 2.0, which is over a decade old, as opposed to .NET Core 2.0, which is relatively current, though .NET Core 3.0 was released recently.
Is there an updated project template for C#, maybe for .NET Framework 4.x (preferably 4.8, the current latest, or even better .NET Standard 2.1)?
@Jared-A-Barneck - what’s your plugin idea? Is it for C# or just going to be written in C# and provide some other capability?
It isn’t for C#; it’s just going to be written in C#. I am a C# developer; though I can also code in PHP, Java, Python, and various other languages, I would love to code in the language I use daily.
I have a few ideas for plugins, such as section finder in np++. https://github.com/rhyous/SectionFinder, which is already written in C#.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00353.warc.gz
|
CC-MAIN-2023-40
| 930
| 5
|
https://neo-adventtec.com/dont-have-a-devops-team-this-is-why-youre-wrong/
|
code
|
Automating Software Delivery
We are now entering the realm of continuous integration (CI) and continuous delivery/deployment (CD) at the heart of DevOps. I will speak about them both separately below.
Technically speaking, CI is not part of DevOps but a technique that is part of agile software development (although DevOps engineers can contribute, for example, by automating the running of static analysis or unit tests as part of a CI pipeline). CI essentially means that developers quickly and often commit their changes to the main code branch.
In the past, developers often spent weeks or months working separately on different features. When the time to release the software came, they would need to merge all their changes. Usually, the differences would be substantial and lead to the dreaded “big bang merge,” where teams of developers would sometimes spend days trying to make each other’s code work together.
The main advantage of CI is that it prevents individual pieces of work from diverging too much and becoming difficult to merge. If a CI pipeline is created with unit tests, static analysis, and other such checks, it allows for quick feedback to developers. It thus lets them fix issues before they cause further damage or prevent other developers from working.
CD can be considered part of DevOps and builds on CI. A CD pipeline automates software delivery by automatically building software whenever changes are committed to a code repository and making the artifacts available in the form of a software release. When the pipeline stops at this stage, we call it “Continuous Delivery.” Additionally, a CD pipeline can automatically deploy artifacts, in which case it is called “Continuous Deployment.”
In the past, building and deploying software were typically manual processes, tasks that were time-consuming and prone to errors.
The main advantage of CD is that it automatically builds deliverables using a sanitized (and thus entirely controlled) environment, thus freeing up valuable time for engineers to work on more productive endeavors. Of course, the ability to automatically deploy software is attractive too, but this may be one step outside the comfort zone for some engineers and managers. CD pipelines can also include high-level tests, such as integration tests, functional and non-functional tests, etc.
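For illustration only, here is a minimal pipeline sketch in GitHub Actions syntax (any CI/CD product would do; the build commands and paths are assumptions, not a prescription):

# Minimal CI/CD sketch in GitHub Actions syntax; commands are placeholders.
name: ci
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests and static analysis
        run: make test lint        # hypothetical targets; fast feedback for CI
      - name: Build the release artifact
        run: make package          # hypothetical target
      - name: Publish the artifact (continuous delivery stops here)
        uses: actions/upload-artifact@v4
        with:
          name: app-release
          path: dist/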
Automating Software Security
This sub-branch of DevOps is sometimes called DevSecOps. Its goal is to automate security checks and best practices in software development and delivery. It also makes it easier to comply with security standards and to produce and retain the evidence required to prove adherence to those standards.
Often, in software development, security is an afterthought, something that has to be done at some point but is often left to the last moment when there is no time to do it properly. Developers are under pressure to perform and deliver within timeframes that can typically be very tight. Introducing a DevSecOps team may thus be a positive contribution. It will establish which security aspects must be met and use various tools to enforce those requirements.
DevSecOps can operate at all levels of the software lifecycle, for example (a pipeline-step sketch follows this list):
- Static analysis of code
- Automatic running of tests
- Vulnerability scanning of the produced artifacts
- Threat detection (and possible automated mitigation) when the software is running
- Automatically checking that specific security standards are followed
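As a sketch, a security stage added to the pipeline above might scan the built container image for known vulnerabilities. The tool choice (Trivy) and the image name are assumptions, and the runner would need the tool installed:

# DevSecOps sketch: fail the pipeline on high/critical vulnerabilities.
      - name: Build and scan the container image
        run: |
          docker build -t myapp:${{ github.sha }} .
          trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:${{ github.sha }}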
DevOps is often tasked with ensuring that a given system is highly available, which is achieved using load balancers, application meshes, and other tools that automatically detect failed instances and take remedial action. Autoscaling is also an important aspect and is often implemented as an automated process by DevOps engineers.
The key to all of this is that the whole system must be designed so that each of its components is ephemeral. In this way, any component can instantly be replaced by a new, healthy one, rendering a self-healing system. Designing such a system is usually not the remit of developers but that of the DevOps team.
Traditionally, organizations used snowflake servers running monolithic software stacks, with everything on that single server. Such a design is very fragile, with everyone living in fear of the next breakdown and engineers on duty 24/7. Admittedly, you also need engineers on duty in an automated system, just in case, but they would typically seldom be needed.
Various tools let you automate the configuration of servers and systems and the provisioning of infrastructure elements (networks, databases, servers, containers). Examples of these are configuration management and infrastructure-as-code (IaC) tools.
Leveraging these, you can ensure that an exact mirror of a given system can be instantly instantiated at a button’s press. They also let you deploy new software versions or keep the configuration of servers or serverless services up to date.
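A configuration-management sketch in Ansible, for instance (the host group and package are examples, not a recommendation), makes a server's desired state reproducible and version-controlled:

# Minimal Ansible playbook sketch; the "web" group and nginx are examples.
- hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and starts at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true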
IaC often integrates with CD. Indeed, one of the final stages of a CD pipeline can be deploying a software release in a production environment using IaC.
When to Avoid DevOps Practices
Compared to traditional, manual software development, DevOps practices require much work upfront. This initial investment usually pays for itself over the long term; if your project is short-lived, however, it is probably the wrong business decision.
So, in any situation where you want to achieve “good enough” software that won’t be used in production, blindly applying DevOps practices isn’t likely a great idea and will only increase your development time for little added benefit. Typical examples include:
- Minimum viable product
- Proof of concept
In any of the above cases, moving to a production-ready product would usually require re-writing the software from scratch, in which case the DevOps practices can then be planned as part of the overall effort.
The most recurring word in the DevOps world is “automation,” as you probably noticed in this article. As a DevOps engineer, my motto is: “If you can’t reproduce it, you don’t own it.”
Compared to traditional development, DevOps usually requires more work upfront to establish the automation patterns. After this initial period, developers’ productivity is improved, and the effort needed by the operations team is significantly reduced.
Perhaps, you have also noticed that I didn’t mention the cloud. This is intentional because DevOps practices apply to both cloud and on-premises environments. However, in the case of cloud-based workloads, DevOps practices are pretty much mandatory for software teams today. This is because manually provisioning and managing cloud resources is cumbersome and prone to human error. Many aspects of cloud engineering are also intrinsically tied to DevOps practices.
In conclusion, it is fair to assume that unless you're rushing to develop a minimum viable product, a DevOps team will allow you to structure your workloads more efficiently for both your developers and your operations team, and make both groups happier. Remember: "DevOps" is a philosophy that encompasses both your development and operations teams, so "just" introducing a DevOps team won't be enough. You will need to implement the necessary cultural changes across your company to make it, and your cloud environment, work.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00690.warc.gz
|
CC-MAIN-2022-33
| 7,368
| 33
|
https://thumpertalk.com/forums/topic/99440-gearing-question/
|
code
|
I find on most of the stuff I ride, 2nd gear revs too much and 3rd gear is a little sluggish. Would you recommend a gear change? If so, what should I change it to? For additional info, someday I would like to put a dual-sport kit on it, so I don't know if I should gear down if I want to ride on the street. Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690016.68/warc/CC-MAIN-20170924115333-20170924135333-00644.warc.gz
|
CC-MAIN-2017-39
| 315
| 1
|
https://in.mathworks.com/matlabcentral/answers/78785-int16-and-fixed-point-transformation
|
code
|
Hi All! I am trying to model a process which takes place in a DSP, and I want to evaluate it with MATLAB.
I get uint16 data from the ADC. This data should be filtered, and the filter works with fixed point (fract16).
Should a transformation take place? How do I do it?
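A sketch of one possible conversion in base MATLAB, assuming the ADC delivers offset-binary data where mid-scale (32768) represents zero; if your ADC encodes differently, the offset step changes:

% Sketch: uint16 ADC samples -> Q15 (fract16), assuming offset-binary input.
adc = uint16([0 16384 32768 49152 65535]); % example ADC readings
centered = double(adc) - 32768;            % shift so mid-scale maps to 0
fract16 = int16(centered);                 % Q15 container: real value = x / 2^15
recovered = double(fract16) / 2^15;        % back to [-1, 1) to verify the scaling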
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358702.43/warc/CC-MAIN-20211129074202-20211129104202-00132.warc.gz
|
CC-MAIN-2021-49
| 255
| 3
|
https://sunnygolovine.com/blog/
|
code
|
Read the latest posts from my personal blog
A few tips and tricks to keeping dependencies in your JS project fresh
I've always wanted to have an old school guestbook for my site. I finally built one using Netlify Functions and Github Gist.
With most of the site up, in this post I want to talk about landing pages and my blogging workflow.
In my last post I talked about some of the decisions I made when building my site. In this post I'll talk about the architecture.
I recently rebuilt my personal website. In this part I'll talk about some of the decisions I made.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991737.39/warc/CC-MAIN-20210514025740-20210514055740-00301.warc.gz
|
CC-MAIN-2021-21
| 570
| 6
|
https://sourceforge.net/directory/natlanguage:japanese/natlanguage:esperanto/license:publicdomain/
|
code
|
The future of information technology will be based on controlling the flow of natural light. This project is an attempt to establish the code (or software) that will enable this to happen. It involves rewriting an OS from the ground up.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864626.37/warc/CC-MAIN-20180522053839-20180522073839-00345.warc.gz
|
CC-MAIN-2018-22
| 1,089
| 5
|
https://forum.matomo.org/t/phpbb3-tag-problem/875
|
code
|
First of all, excuse me for my poor English, I’m French. I have a problem with the Piwik tag on my phpBB3 board. The tag is in the overall_header file and it works OK with Firefox: my board is centered on the page. But with Internet Explorer, my board is on the left if you click on a category link, and then it returns to the center if you click on a topic link… I tried to copy the tag into overall_footer but the problem is still the same.
Just try my board here and you’ll understand the problem : http://www.whitneyconnected.com/forum
I hope you will help me, thank you…
Yes, I see it is doing that with IE. Was it doing this before the script was added? I ask because IE is flaky like that. I’ve checked it in Firefox, Opera, and IE. Since everything is working fine in FF and Opera, I will assume it’s an issue with your CSS and IE.
Can you wrap the … inside an ?
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476180.67/warc/CC-MAIN-20240303011622-20240303041622-00211.warc.gz
|
CC-MAIN-2024-10
| 918
| 5
|
https://community.asterisk.org/t/penalty-in-queues-this-is-working/50098
|
code
|
I want to change the penalty settings in the queue.
The task is simple: first ring the members with penalty 1, then those with penalty 1 and 2, and then all members.
There is a test queue. queues.conf:
musicclass = ny
strategy = ringall
announce-frequency = 0
ringinuse = yes
joinempty = unavailable
monitor-format = wav
monitor-type = MixMonitor
periodic-announce = recordings/please-wait-for-operator
member => SIP/292,2
member => SIP/226,1
member => SIP/610,2
member => SIP/231,3
Part of the dialplan, extensions.conf:
exten => s,n(push),Queue(test,t,,,7)
exten => s,n,Set(QUEUE_MIN_PENALTY=1)
exten => s,n,Set(QUEUE_MAX_PENALTY=3)
exten => s,n,Goto(s,push)
By my reasoning, after the second Set all members should ring, but only the members with penalty 1 ring.
QUEUE_MAX_PENALTY=0 also does not work
Help me please, what am I doing wrong?
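For what it's worth, QUEUE_MIN_PENALTY and QUEUE_MAX_PENALTY are read when the caller enters Queue(), so the usual escalation pattern (an untested sketch; the 7-second timeouts are just examples) sets the limit before each Queue() call rather than after it:

; Escalation sketch for extensions.conf: widen the penalty range before
; each Queue() attempt, since the variables are read when the call enters.
exten => s,1,Set(QUEUE_MAX_PENALTY=1)
exten => s,n,Queue(test,t,,,7)        ; penalty-1 members only, for 7 seconds
exten => s,n,Set(QUEUE_MAX_PENALTY=2)
exten => s,n,Queue(test,t,,,7)        ; now penalties 1 and 2
exten => s,n,Set(QUEUE_MAX_PENALTY=3)
exten => s,n,Queue(test,t)            ; finally all members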
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201699.38/warc/CC-MAIN-20200921112601-20200921142601-00334.warc.gz
|
CC-MAIN-2020-40
| 826
| 23
|
http://discuss.affectiva.com/t/corrupted-analysis-e-g-face-lost-video-too-dark/444
|
code
|
We understand that the Affectiva scores are the % level of confidence for the given metric.
It works nicely, but becomes problematic if the video is not fit for analysis, for example when no face is shown…
What are the methods for optimising scores for these cases?
Is there a clear indicator for inadequate reading?
Should we use a combination of other metrics, indicating no face or low light etc.? What would be these indicators?
Thank you for your help
(C++, ubuntu, 4.0)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515564.94/warc/CC-MAIN-20181023002817-20181023024317-00182.warc.gz
|
CC-MAIN-2018-43
| 464
| 7
|
http://www.coderanch.com/t/613008/Tomcat/ojdbc-tomcat-jdbc-tomcat-dbcp
|
code
|
Not sure if this should be posted in the JDBC forum.
Running tcServer (VMware's Tomcat) using ojdbc6.jar, and I am getting a "maximum open cursors exceeded" error. I already tried cleaning up the code, but I still have the problem. Googling says another possible solution is to use tomcat-jdbc.jar.
That is what confuses me. I don't see any driver class in tomcat-jdbc, and most of the information about this talks of connection pooling. Are these two jars covering different functionality? Is there any overlap? Am I misunderstanding something more basic here?
Please ignore post, I have no idea what I am talking about.
The tomcat-jdbc.jar component appears to contain Tomcat 7's database connection pooling implementation. You didn't point to the Google answer that recommended it, but I'll be willing to give good odds that what it was was an overly-technical way of recommending the use of connection pooling instead of brute-force driver management.
General background info for any and all who might not be aware of it:
There is a certain amount of overhead involved in opening a database connection. Rather than go to all that work, a system that makes many frequent requests for database services should therefore consider using a connection pool, where a finite number of connections are already kept open waiting for use.
The connection pool mechanism does not actually return the Connection object itself. What it does is provide a façade object that mostly mirrors the functionality of the underlying Connection object (and implements the Connection interface). The primary difference between this façade and a raw Connection is that the façade object's close() method does not close the physical database linkage, it merely returns the object to the connection pool that it came from.
Connection pools work best when you obtain the connection, do as much work as quickly as you can, and then close (release back to the pool). In J2EE, you should never attempt to hold a Connection object (pooled or not) between requests, since Connection is an interface, not a persistent class, and so not only would you tie up the Connection far, far longer than needed, but any attempt to serialize it might fail.
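To make the pooling idea concrete, here is a minimal standalone sketch using the Tomcat JDBC pool together with the Oracle driver from ojdbc6.jar; the connection URL and credentials are placeholders, not real values:

// Sketch: Tomcat JDBC pool + Oracle driver; URL/credentials are placeholders.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolSketch {
    public static void main(String[] args) throws Exception {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder
        p.setDriverClassName("oracle.jdbc.OracleDriver"); // provided by ojdbc6.jar
        p.setUsername("scott");                           // placeholder
        p.setPassword("tiger");                           // placeholder
        p.setMaxActive(20);                               // cap concurrent connections
        DataSource ds = new DataSource(p);

        // try-with-resources closes the statement/result set (freeing Oracle
        // cursors) and returns the connection to the pool via close().
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1 FROM dual")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}

Note that the facade's close() also closes any statements and result sets still open on it, which is exactly what keeps Oracle's open-cursor count under control.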
An IDE is no substitute for an Intelligent Developer.
That helps. I was confused, thinking that tomcat-jdbc.jar was a replacement for ojdbc6.jar. So ojdbc6.jar contains the Oracle database drivers for Java 6, while tomcat-jdbc.jar and tomcat-dbcp.jar contain the connection pooling code.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447913.86/warc/CC-MAIN-20151124205407-00264-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 2,514
| 11
|
https://github.com/tomjohnson1492?tab=overview&from=2016-12-01&to=2016-12-31
|
code
|
A Jekyll-based theme designed for documentation and help systems. See the link for detailed instructions on setting up and configuring everything.
blog for I'd Rather Be Writing
slides for keynote on innovation for tcworld india
Archiving the previous version of the Jekyll documentation theme here. This model contains separate outputs for each product.
Forked from PharkMillups/beautiful-docs
Pointers to useful, well-written, and otherwise beautiful documentation.
sample files for java
I reviewed the existing permalink documentation and made some updates to improve the clarity and accuracy. There are several pages that I updated i…
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00164-ip-10-171-10-70.ec2.internal.warc.gz
|
CC-MAIN-2017-04
| 1,022
| 16
|
http://lonesomegamer.com/collection/navajo-wars/
|
code
|
I haven’t played this one yet, so I don’t know much about it. It’s a pure solitaire game. The player controls the fate of the Navajo people from the 16th to the 19th century struggling for their land and survival. The game covers the conflicts with the Spanish, the Mexicans and the English. The board is beautiful and the components are of great quality. The game is quite complex though, and I didn’t manage to get into it yet.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737178.6/warc/CC-MAIN-20200807113613-20200807143613-00568.warc.gz
|
CC-MAIN-2020-34
| 437
| 1
|
https://indyworldwrestling.com/ec3-vs-tommy-dreamer-tables-match-hardcore-justice-2014-impact-wrestling-full-matches/
|
code
|
EC3 may have bitten off more than he can chew when he’s forced to take on “The Innovator of Violence” Tommy Dreamer in a TABLES MATCH!
Watch IMPACT! every Friday night at 10 PM ET on Twitch: http://twitch.tv/IMPACTwrestling
Start your GWN 30-day free trial NOW: https://globalwrestlingnetwork.com/YouTube
SUBSCRIBE for more IMPACT: http://impac.tw/1lw4fTg
Subscribe to the Global Wrestling Network for more than 1000 hours of classic and current IMPACT Wrestling matches and events! Begin your 30-day free trial now (only available to new subscribers): https://globalwrestlingnetwork.com/YouTube
IMPACT Wrestling is moving to Pursuit Channel! Beginning on Friday, January 11, IMPACT! will air weekly on Friday nights at 10:00 p.m. ET. To find out how to watch, visit https://pursuitchannel.com.
You can also watch live every Friday night at 10 PM ET on Twitch! http://twitch.tv/IMPACTwrestling
Subscribe for the latest IMPACT Wrestling highlights, full matches, classic IMPACT and TNA moments, theme songs and amazing backstage interviews featuring the stars of IMPACT!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474533.12/warc/CC-MAIN-20240224112548-20240224142548-00849.warc.gz
|
CC-MAIN-2024-10
| 1,075
| 8
|
https://forum.polygon.technology/t/pos-chain-downtime-and-heimdall-issue/2324/4
|
code
|
Update: Temporary hotfix deployed to resume the Polygon PoS chain
See the new forum post with the details here.
- Polygon PoS users will likely experience downtime starting at 5:50 PM UTC on March 10th due to an issue with the Heimdall implementation.
- Note that all user funds and data remain fully secure.
- The Polygon team is actively working on a solution and will share an update on ETA as soon as we can.
The Polygon PoS team is currently looking into an issue with the Heimdall implementation which is used by one of the two layers of the Polygon PoS chain. This means the network is currently experiencing downtime which began at 5:50 PM UTC on March 10th.
Here are some further technical details:
- Due to a recent upgrade, the Heimdall node, which is part of the dual-node architecture of Polygon PoS, has halted
- All user funds and data are absolutely safe
- While we’re working on identifying the definitive cause, it seems to have originated from an earlier upgrade consisting of a minor parameterization fix to the Ethereum to Polygon PoS state sync/bridging module
- As part of the earlier upgrade, no consensus/state modules were supposed to be affected; the change is to what is technically called a side-handler for the state sync mechanism
- Although still under investigation by the team, we suspect there may have been a bug in the upgrade which affected consensus and caused different Heimdall validators to be on different versions of the chain, thereby not reaching 2/3 consensus. Under Tendermint consensus, this situation causes the Heimdall chain to halt
- Note that Heimdall does not handle user transactions. It is used for validator related transactions and bridging
- The Bor chain relies on Heimdall for block proposer committee selection, specifically span creation. So once the last span that Heimdall specifies passes, the Bor chain also halts.
- At that point, the Bor chain (i.e. the user-facing Polygon PoS chain) is merely halted; while there is a liveness issue, there are no security or safety issues. That means that while there will be downtime, state and funds are not affected.
We’re currently working on identifying the definitive causes and preparing mitigation solutions that will resume operation as soon as possible. More details will be shared soon.
Please rest assured these issues are of the highest priority for the Polygon PoS team and we’ll keep you updated as soon as we resolve the issues. You can also follow us on our Twitter handle at 0xPolygonDevs for updates.
Thanks for your support,
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710777.20/warc/CC-MAIN-20221130225142-20221201015142-00475.warc.gz
|
CC-MAIN-2022-49
| 2,569
| 19
|