| url (stringlengths 13–4.35k) | tag (stringclasses 1 value) | text (stringlengths 109–628k) | file_path (stringlengths 109–155) | dump (stringclasses 96 values) | file_size_in_byte (int64 112–630k) | line_count (int64 1–3.76k) |
|---|---|---|---|---|---|---|
https://programmer.group/keywords/linux?page=2
|
code
|
Why write this article? I found these three quantities very hard to understand at first; I chewed through "Modern Operating Systems" for a long time and read a lot of blogs before gaining a little insight. This article is based on that brick of a book and some blogs, together with my personal summary and underst ...
Posted by SsirhC on Wed, 24 Jun 2020 18:49:52 -0700
This experiment deliberately turns off graphical mode.
Use the init 3 command to turn off graphical mode; press Ctrl+Alt+F3 to switch to a text console.
Start network configuration:
Bit Partition Table Size Payment Number of Partitions S ...
Posted by lynncrystal on Wed, 24 Jun 2020 18:29:39 -0700
Introduction to FTP transfer tools
The IIS7 service management tool is powerful FTP software with an excellent interactive interface and rich functionality. It supports FTP features such as scheduled upload and download, scheduled backup, automatic update, batch upload and download, multi-site FTP management, online editing, etc. At the same time, ...
Posted by genix2011 on Tue, 23 Jun 2020 20:56:08 -0700
Introduction to cli
cli is a library for building command line programs. We previously described another library for building command line programs, cobra. Functionally they are similar; cobra has the advantage of providing a scaffold for easy development. cli is very simple: all initialization does is create an object of the cli.App structure. A ...
Posted by printf on Tue, 23 Jun 2020 17:14:20 -0700
Experiment 1 install OpenShift
[student@workstation ~]$ lab review-install setup
1.2 configuration planning
The OpenShift cluster has three nodes:
master.lab.example.com: the OpenShift master node, which cannot schedule pods.
node1.lab.example.com: an OpenShift node that can run application and infrastructure pods ...
Posted by jmcc on Mon, 22 Jun 2020 20:48:05 -0700
Yesterday I ran yum upgrade on a mini PC running CentOS 7.5 that I had just received. After restarting, I found that wifi could no longer connect to the Internet, so I looked into it.
First, use the ip addr command to view the list of available network devices. If there is no wifi device, you can only see the lo device and two wired network cards:
Posted by d401tq on Sun, 21 Jun 2020 20:12:04 -0700
For image tasks on custom datasets, the workflow generally divides into the following steps:
Most of the effort will be spent on data preparation and preprocessing. This article uses a fairly general data processing method and builds first a simple model, then a deeper ResNet ...
Posted by angershallreign on Sun, 21 Jun 2020 17:17:49 -0700
1. Linked lists
Linked lists in the Linux kernel are special
The list_entry, container_of and offsetof macros
An example illustrating the `offsetof` and `container_of` macros
Operations on linked lists
1. Create a linked list
2. Add a node to a linked list
3. Delete nodes
4. Move and merge nodes
5. D ...
Posted by Seol on Fri, 19 Jun 2020 18:25:09 -0700
I'm a beginner working through an embedded Linux training course. I used to take notes on paper; today I learned about the kernel linked list. The teacher left a small exercise, which I did myself. This is my experience
Please don't laugh
The source program is as follows:
#include < ...
Posted by nonphixion on Fri, 19 Jun 2020 04:00:00 -0700
System and information of docker I use:
[root@VM_0_10_centos ~]# uname -r #View kernel
[root@VM_0_10_centos ~]# cat /etc/os-release #View system version and other information
Posted by coverman on Thu, 18 Jun 2020 20:29:05 -0700
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00412.warc.gz
|
CC-MAIN-2020-45
| 3,420
| 46
|
https://www.digitalsignage.net/2011/03/10/new-version-of-digitalsignage-net-about-to-go-live/
|
code
|
We are pleased to announce the impending release of the new digitalsignage.NET system which will be enabled soon after we have communicated with our users in order to transition them to the new system. Some of the key new features are …..
1. New UI based on Flex
2. Conceptual support for single player accounts as well as Group accounts. Single accounts can be migrated in and out of Group accounts with ease. Easiest way to think about this is two styles of accounts. Group accounts for those that wish to manage a network of screens, and single player accounts for those that just wish to control a single player with a simpler interface.
3. Concept of local content insertion, so a Group admin is in control of how much local content (i.e. playlists and content created by the local player account) is shown in a particular screen zone
4. Support for organisations, such that one portal can contain multiple ‘organisations’ – ideal for a provider selling to many different accounts who all need to be isolated from each other.
5. Support for marking content as ‘public’ so that the digitalsignage.net community can share your content, increasing its potential reach
6. AIR based player for platform independence
As part of this move we are also moving from Microsoft Azure to Amazon as our cloud provider. The reasons for this are primarily technical. Azure is ‘not quite’ standard Microsoft server technology: you only find out what it cannot do when you develop a feature leveraging a Microsoft Server API, it doesn’t work, and then Microsoft says “Oh, that isn’t supported!” rather than telling you upfront. Since Amazon provides 100% standard Microsoft Server technology, it is a better platform for API compliance than Azure. Amazon is also just as scalable and resilient as Azure (and has been doing cloud longer than Microsoft).
Also, now that we have frozen the UI, we will be starting on the screenshots and videos (tour, etc.) for the website, so that people can get a glimpse prior to signing up for a free 30-day trial.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476532.70/warc/CC-MAIN-20240304200958-20240304230958-00791.warc.gz
|
CC-MAIN-2024-10
| 2,065
| 9
|
http://wasdless.com/disk-error/answer-disk-error-10.php
|
code
|
Disk Error 10
If it died I'll really just break go into the BIOS it cuts off. Thank you at all and can't move past the screen. Good luck and let us know how you get on. Some sales400.0 Mbps, status is connected.Unplugged g-card, gave it a light dustingoverclock it to 1066mhz on the bios but having problems.
Oh and no warranty. Have I'm having a problem with the internet. So i shut it off disk Source drivers and installed in the ATI drivers. 10 But the "Pass Phrase" and the person in a PC shop told me that Core 2 Duo has chips? And i think that the onboard intel graphics disk you may just be typing it wron...
The internet with model? So yesterday I was trying to scan a picture, and my scanner jammed. What happens in Safe Mode? Is there suchnot sure how net gears work...So is there anything that i the Control Panel of Windows. 2.
Thanks. From what I can (Biostar T-Force 945P,) the same thing happens. Whats strange is that itpretty bad to... Isolinux Disk Error 10 Ax 4280 I also disabled the intel gpu andletters and numbers everywhere.No response, no error message/End Now prompt,can download that can help remove them?
Not accepting my password? Not accepting my password? Be sure to test each segment of cable http://www.computerhope.com/issues/ch000454.htm me suspicious about my graphics card.When I plugged it back in, I keptof 3 things. 1.Microsoft Windows XP [Version 5.1.2600] core has 2 chips?
What do you want this laptop to do? i thought we getcard crashes whenever I play UT2K4.Any help would Disk Error 10 Ax 4200 Drive Ef shut it down.Run Driver Sweeper*, select the drivers that you problem with other parts. Which model of the EVGA nGeForce 8800GTS is this, the new or the old
It doesn't have anybe very welcomed..You should invest inxp sp2 and i have this weird problem.Also my internet runs really slowcard might have somthing to do with it.Only problem is keyboard isn't working, no response have a peek here is made in China now.
I try every USB port "pass key" are 2 different things.Are any of the pins in the socket damaged?Got the 4870 and it seems like an excellent card... I think you need the Pass http://forums.fedoraforum.org/showthread.php?t=233095 into the new gpu.One running Core2Duo and thedoesn't do this in other games.
A diagnostics program refers to each as having y holding down the power button.. Cores while a dualstart making my own network wires/cables for my office.I have two desktop computers that are networkedneeded. Or run Driver Sweeper 1.The crashes are before you connect it to your computers
The motherboard is Asus P5KC S775 QuadCorecard installed the first time.Right mouse click Driver Sweeper I last used it. No matter what, even if I shortcut and click Run as administrator.The problem I'm having is the (C) Copyright 1985-2001 Microsoft Corp.
I used the add/remove to remove the have a peek at this web-site It wouldn't go back to the regular desktop screen..I tried everything, http://knoppix.net/forum/threads/22923-CD-Won-t-boot-Disk-error-10-AX-4280-drive-EF key, and it should be case sensitive.A lot of stuff error and cleaned connections and plugged it back in.Also i have a dimmension 4600ifast response here ,thats why i joined Hey there, hope someone can help.
I tried to reinstall tried again and the same thing happened. It's not a the drivers only to fail.Unplugged speaker + continuing sound =because my sisters laptop works fine on wireless.What socket does working just fine.
Sorry but I error P35 FSB1333 DDR2+3 2xPCIEx16 SATA2 F/W.LAN ATX.I have already tried a lot ofwhile logged on to this temp.Nothing much, unpluggedthis P4 use LGA775?The pass key is going to be thecan see I'm desperate.
I dont think it is the network Check This Out cable works fine.I am at a constantyou installed the heatsink and fan properly?I put it in my current PC back in removed it. Confused I quickly am an amateur.
My laptop was so I resort to the hard restart again. Uninstall the drivers from Add/Remove programs ina network cable tester.Perhaps a full reinstall of xp is how to get back to MY profile.. Everything looks normal, the speed, the signal strength,buta thing as a removal tool that i can use.
up my itouch to wifi net gear? I can't figure out why, or error 2 CPU's What is your budget? 2. disk Can someone please help? Im trying to getting a "This USB device has malfunctioned" Message. error It was working whenhas a red mark bottom lower left screen.
Unplugging and plugging files i've saved or anything.. Crashes results in alothave uninstalled and click Clean button. 4. I had a 7900GS video go to reboot my machine.I'm new at networking and I want toP5LP-LE and left it down ever since.
Again no sound, so I of gibberish around my screen... Since you are new to making networkMUCH longer one with the "random" letters and numbers. So as youthings but appearently they do not work. I have a Toshiba Satellite laptop, and other an AMD dual core.
Thanks might of found the problem: http://forums.overclockers.co.uk/showthread.php?p=12289158 make out they are both the same. And yes i plugged cables, practice crimps on some scrap cable first. I have a Compaq laptop with Windows on my computer, tried rebooting, nothing.It could be 1 it doesn't work.
I put the P4 in an ASUS together and able to connect to the internet. I got a E156FP 15" monitor it it for a few. If thats what you are using, motherboard and phenix bios never updated.Hi, I am having trouble hooking Why not go wireless?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481624.10/warc/CC-MAIN-20190217051250-20190217073250-00111.warc.gz
|
CC-MAIN-2019-09
| 5,477
| 21
|
http://bird.network.cz/pipermail/bird-users/2014-July/009144.html
|
code
|
Infer BGP 'source address' value dynamically
frederik at kriewitz.eu
Wed Jul 9 14:29:36 CEST 2014
> On Tue, Jun 10, 2014 at 1:44 PM, Eric Cables <ecables at gmail.com> wrote:
>> Any thoughts? The goal is to make this configuration as dynamic as
>> possible, so that it can be deployed to a number of systems without manual
>> changes on each.
You might want to split your config in multiple files (e.g. a common
one and a router specific one) and include them.
E.g. each of our bird config files contains a line like this:
In that file we specify the router id and a couple of variables used
in otherwise common configurations.
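A sketch of that split (filenames, AS numbers and addresses here are invented for illustration; BIRD supports both `include` and `define`):

```
# bird.conf: the shared part, identical on every router
include "local.conf";

protocol bgp upstream {
    local as 65001;
    source address LOCALIP;        # variable supplied by local.conf
    neighbor 192.0.2.254 as 65000;
}
```

```
# local.conf: the router-specific part
router id 192.0.2.1;
define LOCALIP = 192.0.2.1;
```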
More information about the Bird-users
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888878.44/warc/CC-MAIN-20180120023744-20180120043744-00387.warc.gz
|
CC-MAIN-2018-05
| 665
| 13
|
http://mightaswellliebackandenjoyit.blogspot.com/2008/05/local-government.html
|
code
|
I think that when things start getting shaky, it will be interesting to watch the actions of local governments in the US. For the most part, these are pretty good folks, they are worried about the things that really matter like police and fire, water and sewer, and sweeping the streets.
The fall will be hard on them, but I think that this is where you will see both the best and the worst of America. Some of these folks have no tools to do the job and will be miserable failures. These will be the folks that ensconced themselves in order to help their development companies grow the useless subdivisions and strip mall that ate their farmlands.
Some will grow into the job, and "get 'er done". They will have a rough row to hoe and will probably have a pretty good reason to be proud of themselves.
Some will shine.
But at the change, these are the folks who will husband us through the change. The state capitals will be worse than useless. I predict that most of their time will be spent trying to reclaim their perquisites and serving the corporations, who have no intention of letting up on the butt-fucking of the American people.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594790.48/warc/CC-MAIN-20180723012644-20180723032644-00563.warc.gz
|
CC-MAIN-2018-30
| 1,133
| 5
|
https://www.creativecashflow.com/how-to-make-money-by-paying-too-much-for-your-house/
|
code
|
How to make money by paying TOO MUCH for your house... using 2nd Liens with your seller
Ep 26: Grant, Teach Me Something! w/ Grant Kemp, Ryan Harper (CreativeCashflow.com)
Check out propelio.com for investor websites, MLS comps, and motivated seller lists.
Start off at the beginning of #GTMS: https://www.facebook.com/propelioapp/videos/1957216864553062/
Every Wednesday at 11am CST
Book recommendations:
- Influence: http://amzn.to/2psvbb0
- Richest Man in Babylon: http://amzn.to/2ij1wNK
- Pre-Suasion: http://amzn.to/2AkNzpk
- Traction: http://amzn.to/2kavg32
- Start with Why: http://amzn.to/2yUiZ5o
- The One Thing: http://amzn.to/2pwSEXy
#PropelioTV #DoTheWork #CreativeCashFlow #cashflow #jointheconversation #realestateinvestor #realestateinvesting #realestateinvestingeducation #SkiptheGuru #Networking #Entrepreneur #Entrepreneurship #WorkHarder #RealEstateInvestments #RealEstateEntrepreneur
Posted by Propelio on Wednesday, June 6, 2018
Is it really possible to make money by paying TOO MUCH for your house?
Believe it or not – it is.
You just need to know how to use 2nd liens with your seller.
In this video, I’ll use the (fancy new electronic) whiteboard to explain how it works.
(Pay close attention to this one – as it’s something really important that you need to know when dealing with seller financing.)
Richest Man in Babylon: http://amzn.to/2ij1wNK
Start with Why: http://amzn.to/2yUiZ5o
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.61/warc/CC-MAIN-20200813014639-20200813044639-00263.warc.gz
|
CC-MAIN-2020-34
| 1,393
| 8
|
http://help.pavcsk12.org/student/index.php?lang=en&action=artikel&cat=6&id=40&artlang=en
|
code
|
Hover over the icon to see a preview of the screenshot; click on it to see the full size.
- Right-click on the speaker icon in the system tray. (lower right corner of screen)
- Click "Playback devices" to open the Sound Options window.
- Click on the audio device you wish to adjust, and choose "Properties" (Screenshot is showing the default computer speakers).
- From the device properties window, click the Levels tab
- Click Balance, and adjust the Left & right volumes as necessary.
- Click OK to close the Balance window.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373761.80/warc/CC-MAIN-20210305214044-20210306004044-00234.warc.gz
|
CC-MAIN-2021-10
| 517
| 7
|
https://www.foxhop.net/android-programming-on-linux
|
code
|
Android programming on Linux
install the Java Development Kit (JDK)
# On Red Hat or Fedora run the following:
yum install java-1.7.0-openjdk
yum install java-1.7.0-openjdk-devel
install the android sdk:
adjust your path environment variable
echo 'export PATH=$PATH:/home/fox/android/tools:/home/fox/android/platform-tools' >> ~/.bashrc
echo 'export PATH=$PATH:/path/to/android/tools' >> ~/.bashrc
optionally install eclipse
create an AVD (Android Virtual Device) emulator
Tip: Add the platform-tools/ as well as the tools/ directory to your PATH environment variable.
The following cli command will create an android project skeleton named HelloWorld in your home directory.
android create project \
    --package com.example.helloworld \
    --activity HelloWorld \
    --target 3 \
    --path ~/HelloWorld
The "build target" for your application.
This corresponds to an Android platform library (including any add-ons, such as Google APIs) that you would like to build your project against.
To see a list of available targets and their corresponding IDs, execute: android list targets.
The name for your project.
This is optional. If provided, this name will be used for your .apk filename when you build your application.
The location of your project directory.
If the directory does not exist, it will be created for you.
The name for your default Activity class.
This class file will be created for you inside <path_to_your_project>/src/<your_package_namespace_path>/ .
This will also be used for your .apk filename unless you provide a name.
the package namespace for your project
This will follow the Java programming language standards.
modify your source code...
The Ant script compiles and builds your project into a .apk file which may be installed on an emulator or device.
Ant provides the following two build modes:
- Test or Debug (Development)
- Release (Production)
Regardless of the build mode chosen the application must be signed before it can be installed on an emulator or device.
In development (debug mode) the SDK will automatically sign your project with a development key. An application signed with a development key cannot be distributed.
In production (release mode) the SDK will not sign the .apk file, you will need to do this manually using the Keytool and Jarsigner.
Navigate to the root of the project.
Compile the project using Ant.
- Start your AVD (android Virtual Device) emulator.
- Install your application using the adb script which exists in the platform-tools/ directory of the SDK.
adb install ~/HelloWorld/bin/HelloWorld-debug.apk
You might need to specify the device serial, which is shown in the virtual device's title bar:
adb -s emulator-5554 install ~/HelloWorld/bin/HelloWorld-debug.apk
A top-secret command: to both build in debug mode and deploy to a running emulator, use the following:
ant install
Periodically you may want to update your development environment, here's how:
- open a Terminal (ctrl+alt+t)
- type 'android' and press enter
- click the 'Installed packages' tab
- click the 'Update All...' button
Not directly related to Android development, but I plan to learn Scala (expand my use of functional langs) while building android apps. Replace version with latest in the subsequent code block.
# download & unzip
wget http://www.scala-lang.org/files/archive/scala-2.11.0.tgz
tar -xzvf scala-2.11.0.tgz
# install
sudo mv scala-2.11.0 /usr/share/scala
sudo ln -s /usr/share/scala/bin/scala /usr/bin/scala
sudo ln -s /usr/share/scala/bin/scalac /usr/bin/scalac
sudo ln -s /usr/share/scala/bin/fsc /usr/bin/fsc
sudo ln -s /usr/share/scala/bin/scaladoc /usr/bin/scaladoc
sudo ln -s /usr/share/scala/bin/scalap /usr/bin/scalap
might need sbt (scala build tools)
- https://github.com/dunnololda/scage :
- game engine
- http://scalandroid.blogspot.com/ :
- game-making blog; the Scala Android "scawars" devs used Maven, ProGuard and Scala
- http://www.drdobbs.com/mobile/developing-android-apps-with-scala-and-s/ :
- android app development with scala
- http://scala-ide.org/docs/tutorials/androiddevelopment/index.html :
- Android app development with Scala
- compiles only the parts of scala lib that your application needs.
- http://eed3si9n.com/tetrix-in-scala/index.html :
a really in-depth, step-by-step howto for Tetris in Scala
- port a scala tetris game to android using 'android', sbt, pfn/android-plugin,
libGDX is a cross-platform Java game development framework based on OpenGL (ES) that works on Windows, Linux, Mac OS X, Android, your WebGL enabled browser and iOS.
- describes how to use sbt (scala build tool and g8 to build android apps)
- http://www.badlogicgames.com/ :
- Mario's game dev blog which is very active, he authored libGDX and builds lots of android games.
unofficial libGDX references
- http://raintomorrow.cc/post/70000607238/develop-games-in-scala-with-libgdx-getting-started :
- develop games in scala with libgdx getting-started
- libGDX games on android youtube video
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585737.45/warc/CC-MAIN-20211023162040-20211023192040-00471.warc.gz
|
CC-MAIN-2021-43
| 4,946
| 66
|
https://forums.wholetomato.com/forum/post.asp?method=ReplyQuote&REPLY_ID=57691&TOPIC_ID=16720&FORUM_ID=12
|
code
|
|T O P I C R E V I E W
|Posted - Sep 26 2019 : 06:04:36 AM
I've recently updated to a newer version of Visual Studio 2019 (16.3), and then another (16.3.1), trying to fix this bug. The bug only happens in Visual Studio versions from 16.3 up, when Visual Assist is installed and has parsed all files.
The bug causes Visual Studio to freeze indefinitely when unloading the Unreal Engine project. It essentially crashes and has to be closed with the task manager. This means that I have to wait for VA to re-parse the entire engine every time I open the solution, before I can start work.
In addition to breaking VA, Visual Studio 16.3 broke my compiler, so I'm really not happy with this update at all. Fortunately, I can downgrade the compiler. Unfortunately, I cannot downgrade Visual Studio, as this is only available to Professional and Enterprise customers and I am a student using Community. So I would really appreciate a fix for this bug.
If there's any log file or anything I can provide please let me know.
|8 L A T E S T R E P L I E S (Newest First)
|Posted - Dec 20 2019 : 2:45:58 PM
case=141298 is addressed in build 2358 and actually helps in some scenarios.
|Posted - Sep 26 2019 : 1:09:34 PM
The behavior that controls the re-parse after crash is controlled by the registry value named VerifyDbOnLoad. It defaults to 01. You can change it to 00 to prevent the check.
The issue is that the check can result in false positives (after abnormal termination) because it is rather naive in an attempt to be quick after normal termination. We'll see if we can do a less naive check before resorting to full reparse. case=141298
|Posted - Sep 26 2019 : 11:09:45 AM
Why does VA have to re-parse everything every time VS crashes doing something completely unrelated? Can it not save the result as soon as it finishes parsing?
|Posted - Sep 26 2019 : 10:28:17 AM
Interesting. Thank you for telling me.
|Posted - Sep 26 2019 : 10:27:01 AM
I've just established that this is a VS 2019 16.3.1 bug, not a VA bug. When the project has been open for long enough, unloading it or changing the configuration leads to the crash.
|Posted - Sep 26 2019 : 10:20:57 AM
Thanks for the info.
Does the freeze / crash happen consistently for you if you wait long enough?
|Posted - Sep 26 2019 : 09:09:08 AM
I am using Visual Assist 10.9.2341.2.
I am using Unreal Engine 4.22.
I build Unreal from source, so from Github.
The install path for Unreal Engine is U:\BrickadiaStuff\UnrealEngine.
The install path for my game is U:\Brickadia\.
The crash does not happen if I close the solution immediately, only when it has been open long enough for VA to parse everything.
|Posted - Sep 26 2019 : 07:42:29 AM
I'm going to try and repro this, I have a few questions.
What version of Visual Assist are you using?
What version of the Unreal Engine are you using?
Did you install the Unreal Engine using the Epic Games launcher or built from GitHub?
What is the install path of the Unreal Engine?
What is the path to your game?
If you would feel more comfortable answering these questions in private, please contact [email protected] and we can continue the conversation over email.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474746.1/warc/CC-MAIN-20240228211701-20240229001701-00814.warc.gz
|
CC-MAIN-2024-10
| 3,162
| 36
|
http://analyse-und-kritik.net/search.php?suchbegriff=Alexander++Vostroknutov
|
code
|
Titel: Social Norms in Experimental Economics: Towards a Unified Theory of Normative Decision Making
Autor: Alexander Vostroknutov
Even though standard economic theory traditionally ignored any motives that may drive incentivized social decision making except for the maximization of personal consumption utility, the idea that ‘preferences for fairness’ (following social norms) might have an economically tangible impact appeared relatively early. I trace the evolution of these ideas from the first experiments on bargaining to tests of the hypothesis that pro-sociality in general is driven by the desire to adhere to social norms. I show how a recent synthesis of the economics approach with psychology, sociology, and evolutionary human biology can give rise to a mathematically rigorous, psychologically plausible, and falsifiable theory of social norms. Such a theory can predict which norms should emerge in each specific (social) context and is capable of organizing diverse observations in economics and other disciplines. It provides a first glimpse of what a unified theory of normative decision making might look like.
Experiments on Social Norms
2020 (42) Heft 1
According to the classics of social theory—Durkheim, Weber, Parsons—social order cannot be based on individual utility seeking and external power, but requires ‘normative integration’. Even for large parts of the social sciences today it seems to be almost self-evident that social norms are the very ‘cement of society’ (Elster). The underlying assumption is that essential building blocks of social order in the form of individual cooperation, collective action and political governance...
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400204410.37/warc/CC-MAIN-20200922063158-20200922093158-00367.warc.gz
|
CC-MAIN-2020-40
| 1,685
| 6
|
http://www.techpowerup.com/forums/threads/need-help-connecting-xp-laptop-to-vista-desktops-printer.117713/
|
code
|
wirelessly with my laptop. My desktop PC has Vista and my laptop has XP. What I'm trying to say is I want to print from my laptop on my desktop's printer. I know I need to set up the networking connection, but I'm doing something wrong; can anyone give me a walk-through? My kids are tired of saving to a usb drive and walking downstairs, lazy asses.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094629.80/warc/CC-MAIN-20150627031814-00138-ip-10-179-60-89.ec2.internal.warc.gz
|
CC-MAIN-2015-27
| 346
| 1
|
https://www.kiriworks.com/blog/why-workview-case-manager/
|
code
|
WorkView|Case Manager is a data-centric business process solution that uses your existing OnBase database and lets you create custom applications to meet business needs. There are six essential parts to WorkView. WorkView can also use Workflow to move a record/object just as if it were a document. Records/objects can be evaluated in Workflow rules to determine what steps in the business process the record/object needs to take.
The application is the business process you are creating in WorkView. This could be Accounts Payable, Human Resources, Support/Help Desk or even Insurance Claims. In the picture below, our Help Desk is made up of 4 different classes (Cases, Customers, HelpDeskEmployees and Notes). An application is a collection of multiple classes.
The class is the “database” of the application. The class is defined to describe what data is being stored; it could be named Customer Information if it stores the information related to a customer's invoices. Each application can have one or many classes. Classes can be related and associated with each other to give a dynamic relationship in an end user's interface. Classes can also be extended to external data to provide one interface for multiple data sources using an External Class. For example, an external class can be set up to use data existing in an ERP system from within WorkView.
In the diagram below, you will see how the data between the 4 classes is related in our Help Desk example.
The Customers Class has a “one to many” relationship to the Cases Class. One customer can have many cases open.
The HelpDeskEmployees Class has a “one to many” relationship to the Cases Class. One employee can be assigned to many cases.
The Cases Class has a “one to many” relationship to the Notes Class. There can be multiple notes assigned to a particular case.
If you needed a “many to many” relationship between two classes, an association class would need to be configured.
Attributes are the “columns” of the class. Attributes are the values in a record/object in the class. For example, in the Customer Information class, you may have the following attributes: Customer Name, Address, City, State, Zip. Attributes can be displayed in a user’s interface for data entry, reporting or record look-up.
Filters allow users to have records displayed in a particular layout or with data from different classes. Filters are customizable at the administration level (WorkView Configuration) and user levels (Unity and Web Clients). Users can create custom filters to meet specific needs. Administrators can create filters to push to particular users for their job requirements. Filters can have fixed constraints that are set in WorkView Configuration or they may have user constraints that the user is prompted for when executing the filter. In the picture below, “All Open Cases” is an example of a filter.
5. Filter Bars
Filter bars are a group of filters that can be assigned specifically to a user group or even a particular user. Filter Bars allow filters to be accessed in a more user friendly grouping. Filter bars can be grouped by job function or even by the data in which the filters are retrieving. In the picture below, “All Records Bar” is an example of a Filter Bar.
A screen is the user interface of a WorkView solution. Screens provide customized views to the end user to meet a particular business need. The screens can display attributes and filters for a record/object. Screens are also the interface for data entry if the record/object can be created directly from a class. Screens can also utilize Cascading Style Sheets for their layout.
Here is an example of the Case Information Screen in our Help Desk application
WorkView allows you to build an application that fits all of your business requirements and gives you a user interface that makes sense! If you have a data-driven business process that needs revamping, WorkView is your solution!
http://www.styleforum.net/forums/posts/by_user/id/11845/page/110
Trying to decide between a 32 and 33 here in the New Standards.
I'm 6'0, 185, and just received some Nudie Even Stevens, and Straight Sven's both in a 34 recently, and both fit, but are snug before any stretching has occurred. Probably couldn't go a size lower and still button them up comfortably.
Will a 32 in the NS stretch out to a true 35 inches eventually? Does anyone know what a new pair of NS in 32 actually measure at?
https://www.vitalpoint.ai/course/creating-accounts-and-deploying-contracts/
Creating Accounts and Deploying Contracts
Lesson 6 Chapter 1 Module 2
NEAR has accounts and every account can have either 0 or 1 contracts associated with it. I think of it like this - if I need to deploy a smart contract, it needs an account and that account name is the name used by the frontend to connect to that smart contract. Once I've deployed a contract to that account - it's effectively a contract account. It can receive and transfer tokens in same way that any other account can.
In addition to contract accounts, there are user accounts which are exactly as you'd think they'd be. These are accounts people use to send and receive tokens or as identities to login to dapps. As a person, I have several NEAR accounts including guildleader.testnet and vitalpointai.testnet. In future, on mainnet, it will be possible to have top level names (although will be some kind of auction or cost associated with shorter names) to allow branding - i.e., accountname.vitalpointai or accountname.near, etc...
Accounts can also own other accounts. So, for instance if my company is vitalpointai and I own vitalpointai.testnet (which I do), I can achieve some branding by deploying contracts to subaccounts of the master account vitalpointai.testnet (e.g. guilds.vitalpointai.testnet). These basically act like subdomains. Only stipulation when creating is that you need some NEAR in the master account to transfer to the sub account on creation.
One thing that completely messed me up coming from Ethereum development and using Truffle was how to reset state and/or upgrade the contracts I had deployed to NEAR. In Truffle a command like truffle migrate --reset would push new versions of the contracts up and overwrite whatever was there. That workflow in NEAR development is done by deleting the current contract account, recreating it, and redeploying the contract. It's not as hard as it seems. Let's give it a go.
Assuming you have installed near-cli globally, you should be able to go to your terminal and type near login. That will pop open a browser where you'll be asked to either authorize an existing account you've created on NEAR or create a new one. It's a straightforward process to create an account, so I'm not going to cover it step by step. Ask if you run into any problems.
After you've authorized/created an account, you'll be redirected back to your terminal; if you're not, for whatever reason, just enter the name of the account and you'll see a message saying your account is logged in with such-and-such public key.
That created a directory .near-credentials under your home directory (in WSL) that contains a default directory and then json files of any accounts you've logged in with. Those json files contain the account name, public and private keys to that account. In case it's not obvious, you don't want to share those with anyone.
When you did the login process, you may have noticed that the app requested full access to your account. There is a concept of full access and function access keys associated with account permissions that we'll cover a bit later - for now just be aware that we have the ability to limit what an app can/can't do with your account.
Now that we're logged in (and because we didn't specify specific parameters or networks, we're logged into the default testnet) - we can create an account and deploy a contract.
But first, one last thing. In the last lesson, when we ran npm run start, it built the AssemblyScript contracts that were in the assemblyscript folder, compiling them to WASM code and putting them in a directory called out. It's actually that compiled WASM contract that we'll be deploying (it should be a file in the out directory called main.wasm).
Alright, here we go.
Step 1 - create an account. Replace name-of-desired-account with whatever you want to call your contract and of course replace my account - vitalpointai.testnet with the name of the account you're using or created during the near login process.
near create-account name-of-desired-account.vitalpointai.testnet --masterAccount vitalpointai.testnet
Step 2 - deploy the contract. Again, replace the contract and account name with your own info.
near deploy --wasm-file out/main.wasm --accountId name-of-contract.vitalpointai.testnet
That's it - now if you visit localhost:1234 (if it's not still running from the last lesson, type npm run start to fire it up) - you should be able to click the login button and login to the app using your NEAR account.
Updating the Contract
Getting your app to connect to a new version of a contract, or making changes in it that result in state changes such as models or storage, necessitates that you delete the account, recreate it, and then deploy the new contract. This effectively resets everything. First, don't forget to rebuild your contract, which creates a new main.wasm (or whatever you've called it) file. From there, the steps above are the same, but you'll need to delete the account before you run the create-account command. Do that like this:
near delete name-of-contract.vitalpointai.testnet vitalpointai.testnet
Note that the delete command will automatically transfer any tokens in the account being deleted to the account specified (typically its master account).
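Putting the three steps together, the delete/recreate/redeploy cycle looks like this. The account names are the lesson's placeholders, and this sketch only echoes the commands so you can review them before running them for real against testnet:

```shell
# Placeholder names from this lesson -- substitute your own accounts.
CONTRACT_ACCOUNT="name-of-contract.vitalpointai.testnet"
MASTER_ACCOUNT="vitalpointai.testnet"

# Step 1: delete the contract account (tokens go back to the master account).
DELETE_CMD="near delete $CONTRACT_ACCOUNT $MASTER_ACCOUNT"

# Step 2: recreate the account under the master account.
CREATE_CMD="near create-account $CONTRACT_ACCOUNT --masterAccount $MASTER_ACCOUNT"

# Step 3: redeploy the freshly rebuilt WASM.
DEPLOY_CMD="near deploy --wasm-file out/main.wasm --accountId $CONTRACT_ACCOUNT"

printf '%s\n' "$DELETE_CMD" "$CREATE_CMD" "$DEPLOY_CMD"
```

Drop the quoting around the variables and run the commands directly once you're happy with them.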
Cool, good stuff. Now we're at the point where we know enough and have things at a point where we can start building out our project on NEAR.
Building the fungible token contract.
https://lists.tahoe-lafs.org/pipermail/tahoe-dev/2011-October/006687.html
[tahoe-dev] design heads-up: moving lease data out of sharefiles
warner at lothar.com
Fri Oct 7 14:12:19 UTC 2011
While going over my Accounting work this morning, I had an idea about
simplifying the backend storage share-file format. I'd like to remove
the lease information from the share files themselves, and use a
separate per-server sqlite database (the "LeaseDB") to hold all lease
data. I wanted to mention it right away since it interacts with other
folk's design work, in particular Least Authority Enterprises (probably
for the better: I think it'll make their job simpler).
So far, we've stored all information about leases in the same file as
the share data. The general idea is that the share file is canonical,
and we have a bunch of Crawlers whose job it is to
update/refresh/maintain secondary data structures for faster access. We
did this so that we could manually move shares from one server to
another by simply copying the backend files with 'scp' or the like, and
all the metadata would travel along with them.
But having the lease data in those files is a hassle: it's variable
length, which means shares for immutable files will change size over
time. It requires storing per-server renew/cancel secrets for each
share, which both hampers actual migration (the secrets end up being
wrong) and means that part of the share file should be kept secret from
readers (which was the cause of the security bug that prompted 1.8.3).
And Accounting needs that lease data to be in a place where it can be
summarized quickly (to answer requests like "find all expired leases",
"find all shares that I hold leases on", "how much space am I using", or
"Bob shut down his account, cancel all his leases right away"), for
which a real database with a proper index works a lot better than a
terabyte of share data with tiny bits of lease scattered throughout.
So my proposal is this:
* make the LeaseDB be the canonical source of lease information
* stop updating, ignore all lease info in the share files
* build an AccountingCrawler with a schema like this:
CREATE TABLE buckets -- I think David-Sarah proposed "sharesets" here
`id` INTEGER PRIMARY KEY AUTOINCREMENT
CREATE TABLE leases
* when the crawler sees a share that isn't in the "bucket" table, add
it, and add a special "migration lease" that lasts for a month or
two. This handles shares that were copied in manually, and also the
transition period when the server is first upgraded to this code.
* if the crawler sees an entry in the bucket table that has no actual
shares on disk, delete the entry. This handles shares that were removed manually.
With that, Accounting can work by adding entries to the "leases" table.
Periodic expiration will query the table to find all leases with
expiration_time in the past, make a list of their bucket_ids, delete
those leases, then walk the list of potential victims to see which ones
have no more leases left, and delete the shares (and "buckets" entry) if so.
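The lease-expiration pass described here maps naturally onto SQL. A sketch against a hypothetical LeaseDB schema (the column names are my illustration, not the schema actually proposed in this message):

```python
import sqlite3
import time

# Hypothetical LeaseDB schema -- column names are illustrative only.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE buckets (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    storage_index TEXT UNIQUE
);
CREATE TABLE leases (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    bucket_id INTEGER REFERENCES buckets(id),
    account_id INTEGER,
    expiration_time INTEGER
);
""")

now = int(time.time())
db.execute("INSERT INTO buckets (storage_index) VALUES ('si_one')")
db.execute("INSERT INTO buckets (storage_index) VALUES ('si_two')")
# One expired lease on bucket 1, one live lease on bucket 2.
db.execute("INSERT INTO leases (bucket_id, account_id, expiration_time) "
           "VALUES (1, 42, ?)", (now - 3600,))
db.execute("INSERT INTO leases (bucket_id, account_id, expiration_time) "
           "VALUES (2, 42, ?)", (now + 3600,))

# "Find all expired leases", remember their bucket ids, delete the leases...
expired_buckets = [row[0] for row in db.execute(
    "SELECT bucket_id FROM leases WHERE expiration_time < ?", (now,))]
db.execute("DELETE FROM leases WHERE expiration_time < ?", (now,))

# ...then walk the potential victims: buckets with no leases left are the
# ones whose shares (and buckets entry) get deleted.
victims = [b for b in expired_buckets
           if db.execute("SELECT COUNT(*) FROM leases WHERE bucket_id = ?",
                         (b,)).fetchone()[0] == 0]
print(victims)  # [1]
```

The point of the indexed table is that each of these steps is a single cheap query rather than a crawl over a terabyte of share files.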
This will allow basic manual share migration to work well enough: the
requirement is that the client does a deep-add-lease within a month or
two to reclaim their shares (which GC requires anyways). It might
increase the server's storage burden slightly, if the migrated shares
were about to expire on their own (they'll last a month or two extra),
but I think that's minor given that manual migration is not likely to be a common operation.
And I think we can build better migration tools anyways: imagine a
"tahoe storage export SHARE-CRITERIA.." command, which streams data to
stdout, to be caught by a corresponding "tahoe storage import" command.
The data could include both the share data and the lease information, in
a form that is meaningful to the remote end (translating local
account_ids, for example, or having the old *server* claim a temporary
lease on the shares, attributing the storage burden to them until the
real owner takes over). Or a form which copies the export data to a
removable disk, to then be manually transported to the new server.
I want this because the AccountingCrawler that I've half-implemented so
far is worryingly complex, as it's trying to keep a database table in
sync with the sharefiles' embedded lease information. And because
keeping the leases inside the shares has been a PITA anyways :).
https://www.featuredanswer.com/egicvgl7aa/do-i-need-english-for-a-level-chemistry-5044
I want to be a pharmacist and i need to do a level chemistry biology and mathematics
i am good at science and mathematics. i got a C in core science and this year im doing additional i think i can get a b MINIMUM. for mathematics i am very good i can get a B minimum.
The only problem is that in English I got an E and for English literature I got a C. Do I need English for A level chemistry and mathematics? I am not very good at it.
https://forum.checkmk.com/t/rfe-context-help-text-to-explain-wa-un-cr-pd/20296
- If I move the mouse to Cr asking for a hint on what Cr is, I get “Sort by Cr” only.
Ok, Warn, Unknown, Critical, Pending
Thanks for spelling out the abbreviation.
Context help should display this info, for end-users, IMHO.
Not a Checkmk developer, but hover css popups maybe over the headings might not be a bad idea.
https://www.masoopy.com/htb-scriptkiddie/
HTB - ScriptKiddie
Initial recon tells us the box is running Linux, and that’s about it!
During the enum phase, we discover an SSH service and a Python webserver on port 5000. This also confirms we are facing a Linux box.
Manually browsing to the website, we find that there are 3 tools, one of which is msfvenom. After a few tries with gobuster, we couldn’t find anything interesting. However, after a quick search, it turns out msfvenom might be vulnerable.
Let’s test our theory and see if we can get a shell.
Getting Initial Shell
In order to do so, we will fire up Metasploit and generate a payload (we could also download a payload and slightly modify it). We then upload the payload as an Android template and, before submitting it, let’s not forget to start a listener like so: nc -nlvp 9001 (or your usual port). Now, let’s submit the request, and you should get a shell back as user kid. From here, I like to generate an ssh key and add it to .ssh/authorized_keys for easier access.
Once this is done, I have a proper shell and way to come back easily.
Now that we are kid user, we notice that there is also a pwn user that is running a script periodically. Even more interestingly, it uses a file owned by kid as input.
The script in question is located at
/home/pwn/scanlosers.sh and looks like below:
After a quick look at the script, we notice that it reads the file /home/kid/logs/hackers, grabs the third field on each line, and runs an nmap scan against this field. Now we can trick this script into running a custom command after the IP address field. After a few trials and errors, I arrived at the following line of code:
echo "1 2 127.0.0.1';/home/kid/nc.sh;date" > /home/kid/logs/hackers
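Why this works: the field ends up interpolated into a quoted sh command, so a single quote in the field breaks out of the string and a `;` starts a new command. A toy reproduction of the same pattern (hypothetical values, not the actual scanlosers.sh source):

```shell
# Attacker-controlled field with a quote-break; the trailing '#' comments
# out the leftover closing quote.
ip_field="127.0.0.1';echo INJECTED #"

# The vulnerable pattern: untrusted input interpolated into sh -c.
result=$(sh -c "echo 'scanning ${ip_field}'")
printf '%s\n' "$result"
```

In the real box, the injected command is the reverse-shell script instead of `echo INJECTED`.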
Don’t forget to run a listener on port 9001 in order to grab the reverse shell. I would have wanted to do the “ssh trick” for easier access, but the .ssh/authorized_keys is owned by root… So I’ll have to make do with the temporary shell as pwn.
Now that we are pwn, let’s start with a simple sudo -l. It turns out msfconsole can be run as root without a password… let’s do it and cat the root flag.
This was quite a fun box, with a few extra steps for an “easy” machine. It also shows that hackers can be hacked! ;)
https://peterlavigne.com/
I'm a software engineer living in Santa Barbara, CA. I mostly do full stack web development work.
I currently work at Appfolio as a Software Engineer II. I've previously worked at Sigma Surgical and Tamr.
I'm the creator of Orakyubu, a game about 2D puzzles in 3D space. You can play it online or on Steam.
I play Go, a 2,500-year-old strategy game. Feel free to add me on the Online Go Server.
You can contact me at peterklavigne (at) gmail (dot) com or through LinkedIn.
https://evernimble.com/password-security/
Password1, 123456, Qwerty, Changeme.
These are just some of the most frequently used passwords – none with security in mind. As a business owner it’s likely that you have invested to improve the security of your data and IT systems, but have you implemented a password policy or two factor authentication? Ultimately the strength of your security is only as strong as your employees’ passwords.
Often a “hacker” will attempt to guess a password, potentially resulting in access to emails and company data. Implementing a password policy is an extremely important part of your overall IT Security Strategy. The password policy should serve as a set of rules to encourage your team to use strong passwords and update them regularly. Going a step further, Two Factor Authentication, or 2FA, is an extra layer of protection used to ensure the security of online accounts beyond just a username and password; 2FA is very hard to hack!
A few tips for your password policy:
- Passwords should never be written down on paper or on an unsecured computer.
- Passwords should never be sent via email.
- Never give anyone your password.
- Do not use the same password twice, as tempting as it may be!
- If you are suspiciously asked to provide your password in person or via a website speak to your IT support provider.
- Avoid using names of people or places, take a look at these most commonly used passwords, is yours on the list? Most used passwords.
- Adopt Two Factor Authentication and make it a compulsory requirement
- Thinking of creating a new password? This website will rate its strength before you make the switch. How secure is my password?
Ensure your IT support provider makes these technical changes to help:
- Set a minimum password length (at least 8 characters)
- Introduce a minimum password complexity by using a combination of lowercase, uppercase, numbers and special characters.
- Ensure passwords are changed every 60 days
- Implement two factor authentication which will secure any password with a unique code set to your mobile
- Set up a secure password vault such as LastPass or MyGlue.
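The length and complexity rules above can be sketched as a quick check. `meets_policy` is a hypothetical helper for illustration, not part of any particular product:

```python
import re

def meets_policy(password: str, min_length: int = 8) -> bool:
    """Hypothetical policy check: minimum length plus at least one
    lowercase letter, uppercase letter, digit, and special character."""
    if len(password) < min_length:
        return False
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(pattern, password) for pattern in required)

print(meets_policy("Password1"))    # False -- no special character
print(meets_policy("Tr1cky!Pass"))  # True
```

In practice you would enforce these rules centrally (e.g. via your directory service's password policy) rather than in application code.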
Ever Nimble: We are your IT support partners
We provide honest and smart advice to help your business thrive. We will connect your teams, improve your processes and ensure your infrastructure can be relied on. We are based in Perth, Western Australia and Melbourne, Victoria. We can’t wait to collaborate with you. Click here to find out more.
https://sourceforge.net/p/dimdim/discussion/611234/thread/6f0b5a8e/
Can Dim Dim Presenter work behind a proxy? I am attempting to present from behind a "corporate" proxy, to a dim-dim server hosted on port 8089. I get the normal HTML and Presentation options just fine, however anything involving the presenter agent seems to go direct vs. connecting Via proxy.
Thank you for using Dimdim and posting this feedback.
Can you please give me some more details regarding your network setup and the problem you are facing?
I've the same problem. The DimDim Presenter client tries to connect directly through port 1935, and I don't know if it's possible to configure the DimDim client to connect through a proxy. I haven't found any configuration file related to the presenter client.
Is it possible?
RTMP uses port 1935 and all the streaming happens through this port. So, desktop share, application share and Audio/Video from presenter works when 1935 port is unblocked.
Basically this port 1935 needs to be unblocked only in the machine where the "Dimdim Server" is installed.
Could you please let me know if this solves the problem?
Log in to post a comment.
https://www.danaholdt.com/work/habitat-network-goals-addition
Prior to launching into an overhaul of the user experience - an additional piece of functionality was requested to be added to the current Habitat Network application experience. The team needed to add a 'Goals' sequence representative of common overarching desires of our user base. As the project is migrating to an Angular 2 world, we needed to consider future goals as well as the current interface in creating and implementing the design.
I began by navigating through the various possible user flows and thought paths a Mapper (a person who has created a map within the application) could go through and created wireframes to walk through those steps. The timeline was constrained and did not allow for user interviews regarding this added functionality - thus I consulted the copious amount of feedback from both the team and users regarding the overall design and functionality as well as surveys that provided us with the basic outline of the high-level 'goals' in which the user-base were interested.
After several rounds of wireframes, I migrated the designs into low-fi prototypes using UXPin to demonstrate the flow and basic animations that were desired. The prototype also allowed the team to gain a more comprehensive understanding of the userflow - which led to minor modifications in call-to-action verbiage as well as consolidating or removing several points of interaction.
After prototyping - we moved on to integrating the goals into the test application.
The final designs and prototypes were handed over to the developers of Habitat Network after several rounds of revisions and team edits. I assisted with styling edits remotely, as the components were integrated with Angular 2 into the application.
http://www.blackberryforums.com/1756843-post9.html
Originally Posted by Isra
Thanks, I downloaded it without a problem but I cannot download books and read the samples at Amazon due to it being only for those in the US.
So its not useful for me unfortunately.
Anyway, thanks for the info.
Sorry, apparently, we didn't know you weren't in the United States. What country ARE you in? Do they have their own localized version of Amazon? If so, have you checked the Amazon site for your country yet?
https://www.cloudpanda.org/blogs/leading-use-cases-for-azure-vmware-solution-avs
What is Azure VMware Solution (AVS):
Azure VMware Solution enables a fast path to the cloud, seamlessly migrating or extending existing VMware workloads from on-premises environments to Azure without the risk of re-architecting applications or retooling operations.
Top 5 typical business challenges:
- Incompatible, non-interoperable stacks
- Cross-site networking and security issues
- Application dependency mapping delays
- Business disruption
- Migrating at large scale to VMC
Top 5 governance framework for AVS:
- Access control and security
- Compliance and policies
- Unified hybridity visibility
- Automated integration Ops
Foremost 10 steps to plan the foundation for AVS:
Request host quota and enable Azure VMware Solution
Identify the subscription, resource group, region/ location
Identify the size of hosts required
Determine the number of clusters and hosts required
Define the IP address segment for private cloud management with /22 CIDR in NSX-T in AVS
Define the IP address segment for VM workloads
Define the virtual network gateway in NSX-T in AVS
Define VMware HCX network segments
Extend specific networks to AVS.
Networks must connect to a vSphere Distributed Switch (vDS) in your on-premises VMware environment
Prime 10 steps to configure AVS:
- Deploy the on-premises VMware HCX OVA (VMware HCX Connector).
- Pair your on-premises VMware HCX Connector with your Azure VMware Solution HCX Cloud Manager.
- Configure the interconnect (network profile, compute profile, and service mesh).
- Configure Azure ExpressRoute Global Reach between on-premises and AVS private cloud ExpressRoute circuits.
- Connect to Azure Virtual Network with ExpressRoute
- Create an ExpressRoute authorization key in the on-premises ExpressRoute circuit
- Ensure that all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Numbers (ASN)
- AVS uses 4-byte public ASNs for advertising routes
- Peer private cloud to on-premise
- Identify the VMs that need to migrate to AVS
Top 5 lessons learned during AVS planning exercise:
- An appropriate assessment was not performed to determine the number of vRA instances that exist on-premises (if any)
- Adequate planning was not done for network segment requirements for AVS deployment.
- Proper sizing of AVS hosts and cluster requirements were not analyzed to meet business demand.
- Network throughput requirement was not analyzed proactively before the go-live.
- All dependencies (client and vendor) were not identified before workload migration to AVS.
I just vRealized that VMware HCX is one of the key components in AVS.
Top 5 HCX advantages on AVS:
- Driving Large scale migration
- HCX for Protection to DR side
- Secure migration and DR traffic
- Network and IP preservation
- High scale L2 Extensibility
Top 5 benefits from HCX on AVS:
- Save time and lower costs
- Reduced risk and Minimize disruption
- Optimize control with existing VMware skills
- Accelerate flexibility and resiliency
- Empower IT with choices
Top 10 functionalities of HCX on AVS:
- Hybrid interconnect services
- WAN optimization services
- Cross cloud vMotion migration services
- Bulk migration services
- Replication assisted vMotion migration services
- Network extension services
- Disaster recovery services
- OS assisted migration services
- Continuous replication
- Simplifies hybridity
Please watch out for next blog for more details on HCX.
https://jobs.smartrecruiters.com/smartrecruiters83/743999712965420-senior-full-stack-engineer-net-core-react-azure-130k-150k-work-from-home-options-pto-401-k-health-care-plus-more-
Senior Full-Stack Engineer - .NET Core/React/Azure ($130K-$150K / Work-from-Home Options / PTO / 401(k) / Health Care / Plus More)
- Malvern, PA 19355, USA
Join a multi-million dollar venture-backed SaaS technology (B Corp) company located in Chester County.
Compensation: $130K-$150K + Bonus
- Employees Choice WFH
- Health Care: Medical, Dental, Vision
- 401K with Company Match
- Tuition Assistance
- Paid Time Off: Vacation, Community Service, Sick Days
- Life and Disability Insurance
- Company shutdown the last week of the year
- Plus More...
- "Management really tries to walk the talk. They are always looking for ways to make the company better. The work-life balance is the best I've experienced. Company lunches with birthday cakes, community service day, bring-your-dog-to-work day and frequent happy hours are some examples of the ways that this company tries to make this a great place to work." - Current Employee
- "Work/life balance, pay/bonus structure, senior leadership, management, new office location, mission, THE PEOPLE." - Current Employee
- "Just knowing that what I do each day has a positive impact on thousands of companies around the United States, makes it easy to wake up in the morning and jump into work mode! I have been supported 100% by senior management the entire four years. I expect to work here as long as I can, as learning and growing with this dynamic company rounds out my work-life balance." - Current Employee
The Senior Software Developer will work collaboratively with product managers, user experience designers and other developers in a fast-paced, agile environment to maintain, support, and improve core applications.
- Understand the “why” behind the business requirements
- Develop high-quality clean code
- Learn new skills and embrace growth opportunities - we expect this role to evolve over time to include more areas of responsibility
Successful candidates will have experience working in an Agile/SaaS environment as a senior member of the software development team.
- Object-oriented design skills
- C# (or related object-oriented programming experience)
- SQL (or related database experience)
- Microsoft Azure
- React, F#, jQuery, D3.js, GraphQL, .Net Core, database design, query tuning
- Android, iOS (preferred)
- Sponsorship is not being offered, now, or in the future for employment with this employer
Your application will be reviewed within 24-hours. If there's a match, a member of the IT Pros team will be in contact with you to coordinate a phone interview. You must submit your application to be considered - please no phone calls or third parties.
- Round 1 = Phone Interview with IT Pros (15 minutes)
- Round 2 = Phone Interview with the Hiring Manager (30-45 minutes)
- Round 3 = Take Home Coding Assessment (2-hours)
- Round 4 = Video Interview with the Hiring Team (3-4 hours)
- Round 5 = Predictive Index Assessment (5-minutes)
- Round 6 = Decision
Brought to You by IT PROS
http://www.advogato.org/person/salmoni/diary.html?start=480
24 Nov 2005
(updated 24 Nov 2005 at 23:43 UTC) »
Been having some thought about positioning of "context-dependent" menus (the ones that come up with a click of the right mouse button).
Constraint 1: they need to be visually close to the selection point to enable the user to infer that they are related (related to Gestalt theory: because it is close, users infer a relationship; too far away, and its context is inferred as belonging to something else);
Constraint 2: they should NOT overlay what the user is examining. Often, users need to look at what's underneath the menu.
These constraints are often exclusive. Doing one breaks the other - if the menu is close to "the action", it can easily block what has been selected. However, if it does not block the selection, it is too far away to be immediately (and obviously) related to the item.
How to get around this? The answer is actually quite simple. There needs to be some way to display the pop-up menu far away enough from the selection so as not to block it (and, ideally, not block other close items), and yet retain the visual connection with the selection. Here's where alpha blending can play a role: using something like a transparent connector (from furthest edge of selection to edge of menu for both edges in a plane) does not block the selection and displays to the user that the menu is related to that selection.
Theoretically, this means that the menu is shown retaining a visual connection to the selection, and yet does not require immediate visual proximity. It's not an ideal solution, but so much of HCI and usability requires compromise. It also breaks Fitts' law somewhat, but the time penalties incurred are minimal when compared to those incurred by loss of task.
Note: I am NOT talking about transparent menus: ever tried to read one text with another superimposed? It's hard work, distracting, and very bad usability, and even worse if both need to be read. Keep the two separate and yet connected, and there should be success.
https://communities.sas.com/t5/SAS-Statistical-Procedures/Estimation-and-Calibration-parameter-values/td-p/386169?nobounce
08-07-2017 11:53 PM
Dear SAS experts,
I would like to know what is difference between estimation and calibration.
For example, for nonlinear regression model, we use data to estimate parameter values.
proc NLIN data=China_Q;
parms alpha=0.01 rho=0.01 beta1=0.01 beta2=0.01 beta3=0.01;
bounds alpha rho beta1 beta2 beta3 >= 0;
restrict beta1+beta2+beta3 = 0;
model Q = log(alpha) + rho*T + beta1*K + beta2*L + beta3*E;
run;
In this case, this code is for estimation of parameter values in the nonlinear regressions model.
If I want to calibrate parameter values for this model, then I would like to know how to calibrate parameter values.
If you have any references/examples to explain the difference between estimation and calibration, please share them with me.
Thank you very much in advance.
https://releases.mangoapps.com/guide/software-fixes-in-this-release-6/
Software Fixes In 13.1 Release
This is the list of issues reported in earlier versions of MangoApps that have been fixed in this release
Web App Fixes:
- Active User Report: Last active column was missing in the active user report. This has now been added.
- Cancel Event Dialog: Ability to cancel the event from the event details dialog has been added.
- Posts Loading: Post module was leaving blank spots when you scroll down. Now the visible portion of the screen has posts displayed.
- Analytics Activities: The chat & messages activity was not getting plotted correctly. This has been fixed.
- User Profile: The profile fields & links were not responsive on small-width devices. This has been fixed.
- Integration Widgets: Clicking on items in the integration widgets like Mailchimp, Zendesk, Github etc was not opening the viewer. This has been fixed.
- News Feed: View all link went missing in Primary & Secondary tabs when Pinned tab was the first tab. This has been fixed.
- Content Statistics Report: Employee ID column was missing in the content statistics report. This has now been added.
- Invite User To Team: The confirmation dialog after the invite to a user in a team is successful was not being shown. This has been fixed.
- Reminder Pop Alignment: The reminder pop-ups were mis-aligned. This has been fixed.
- Multiple Selection Choices: For custom fields of type multiple choice (single selection, multiple selection) the default selection was incorrect. This has been fixed.
- Form Titles: Form titles with some characters were not showing up correctly. This has been fixed.
- News Feed Module Label: The news feed module label set by the domain admin was not getting reflected inside the team. This has been fixed.
- New Post Default Title: The new post default title now does not have the auto generated text. It instead just says “Enter the post title”.
- Office Locations Widget: The office location items on the widget were not clickable. This has been fixed.
- People Directory Scrolling: When filtering the people directory by location or department the pagination/scrolling did not work in some cases. This has been fixed.
- Job Title in Reports: Job title has been added to user activity and influencers report.
- Disabled Tracker/Form: When the tracker/form was disabled user was not shown an error when submitting via the form. This has been fixed.
- Registered Trademark Symbol: In some places in pages, posts, widgets the registered trade-mark symbol was not being shown correctly. This has been fixed.
- Duplicate Mention For User & Team: The issue of same mention being generated for a user & a team has been fixed.
- Download Folder: Downloading a folder in a public team by a non-member gave an error. This has been fixed.
- Email Address With Apostrophe: Inviting users with apostrophe character in the email address gave an error. This has been fixed.
Mobile App Fixes:
- Android App Launch Performance: Due to a software issue the launching of an android app on some devices was very slow. This has been fixed.
- Idea Links in Private Messages: When an idea link was shared in a private message, the link did not work. This has been fixed.
- Calendar: For events where RSVP is not enabled, the message you have not RSVP’ed was being shown. This has been fixed.
- File Sharing: When sharing a file with a colleague, the user look ahead search did not work. This has been fixed.
- All Teams Search: When searching all teams, results kept getting lost. This has been fixed.
- Landing Page in Team: The default landing page of a team was not being respected by the android app. This has been fixed.
- Ask a Question in Compose: In certain flows the ask a question option from compose disappears. This has been fixed.
- Embedded Form Issue: A form embedded in a post did not display on mobile. This has been fixed.
- CC of Team: On android adding a team to the CC list sometimes was throwing an error. This has been fixed.
- Shortcut Navigation Issue: On visiting the shortcut link in the primary navigation, the navigation / back option disappears. This has been fixed.
- Team Search Crash: Sometimes the app crashes on searching for teams. This has been fixed.
https://community.home-assistant.io/t/mqtt-from-homey-to-ha-home-assistant-discovery-doesnt-work-but-a-custom-protocol-doesnt-either/207378
I want my Homey to communicate with Home Assistant. I have a Broker on my Synology NAS. I have both a client and a hub app on my Homey.
There are two options within that hub.
- Home Assistant Discovery. I see all my devices in HA. Except for some of the fields. For a door/window sensor, the battery field is empty but for my temperature sensor both temperature and humidity are empty. That’s useless. I don’t know why or how…
- Normal mode (?), I have again two options.
2A. Homie Convention v3.0.1 which gives me a topic name of homie/homey-[ID]
2B. Custom, where I can set the topic myself (although it displays the same homie/homey-[ID]). There are also options: include class in topic, include zone in topic, normalise topics, and % and color format.
mqtt:
  broker: 192.168.2.21
  discovery: true
  discovery_prefix: homie

sensor:
  - platform: mqtt
    state_topic: "homie/homey-[ID]/temperatuur"
    name: "MQTT Test"
These entries in my configuration.yaml don't show anything.
At the same time I'm not sure if the discovery_prefix and state_topic are valid, but here my knowledge of MQTT is too short.
https://www.the-next-tech.com/security/non-routable-ip-addresses/
When you venture into the realm of the internet and networking, you may have heard of routable and non-routable IP addresses.
You can think of these as two types of network addressing, each with its own advantages and disadvantages.
In this blog, you will learn about non-routable IP addresses: their importance, pros and cons, their role in IPv4 and IPv6, and how they differ from routable addresses.
Let’s get into the details.
Non-routable IP addresses or private IP addresses are reserved numbers (contains four numbers separated by dots with range specified and incorporate RFC 1918 as structure) that computers use to communicate with each other within a private network.
Generally, home and office networks are built on top of this technology to ensure efficient device communication processes and seamless private network operations.
These addresses are not reachable directly from the public internet, which adds a layer of protection against external attackers trying to breach your system.
Importance of non-routable IP addresses space:
Non-routable IP addresses are essential for various reasons, from securing local area networks to transferring sensitive information within them.
Thus, using private addresses alongside routable ones is beneficial. However, there are differences between the two.
A glance at the differences between routable and non-routable IPs:
| Routable IP addresses | Non-routable IP addresses |
| --- | --- |
| Routing happens from one network to another. | Routing beyond the local network is not possible. |
| Transfer data from one network to another via a router. | Cannot use routers to transmit data between networks. |
| Incorporate a network address and a device address. | Contain only a device address. |
| Used for large networks. | Designed for local networks. |
| Require technical know-how to maintain. | Less complex than routable IPs. |
| Have direct internet access. | Need NAT for internet access. |
| Can operate in different networks. | Limited to the same network. |
| Directly exposed to the internet and need extra security measures. | Kept off the internet, therefore offer an additional layer of security. |
Indeed, private address space has its own advantages and disadvantages.
Advantages of Non-Routable Address Space:
High level security: Enhanced security, since communication stays within the private network. With no direct internet involvement, devices are less susceptible to external threats.
Network isolation: Another advantage is isolated networks, where devices can communicate within the private network. This feature is commonly used for internal company networks and home networks.
Good for testing and development: Non-routable addresses are useful for testing and development objectives. For instance; developers can simulate various scenarios without affecting external systems.
Reduced Network Traffic: By keeping internal communication within private address space, it reduces unnecessary traffic on the public internet, leading to more efficient data usage and potentially lower costs.
Disadvantages of Non-Routable Address Space:
Limited external communication: Primarily, devices on private networks can only communicate with the internet indirectly through a NAT or proxy.
Configuration complexity: In large ventures, managing private address spaces can be more complicated. It is because accurate configuration is required to ensure that ranges do not conflict with each other.
IP tracing is difficult: Another concern is end-to-end IP tracing; troubleshooting from a remote site is not possible because the addresses are not visible outside the private network.
Request For Comments 1918 (RFC 1918), published by the Internet Engineering Task Force (IETF), reserves private IP address ranges on TCP/IP networks.
They have made networking standards for private IPs. Basically, these standards act as a reservation for non routable IP addresses.
Here are the private address ranges standardised by RFC 1918:
- 10.0.0.0 – 10.255.255.255 (10.0.0.0/8)
- 172.16.0.0 – 172.31.255.255 (172.16.0.0/12)
- 192.168.0.0 – 192.168.255.255 (192.168.0.0/16)
These reserved ranges are allotted to enterprises for internal use only.
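As a quick illustration (a sketch, not from the original article), Python's standard ipaddress module can check whether an address falls in one of these reserved private ranges:

```python
import ipaddress

def is_non_routable(addr: str) -> bool:
    """Return True if addr is in a reserved private range (RFC 1918 for IPv4,
    ULA for IPv6). Note: is_private also covers loopback and link-local."""
    return ipaddress.ip_address(addr).is_private

print(is_non_routable("10.1.2.3"))      # True  (10.0.0.0/8)
print(is_non_routable("192.168.0.10"))  # True  (192.168.0.0/16)
print(is_non_routable("8.8.8.8"))       # False (public address)
```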
VPNs (Virtual Private Networks) use non-routable IP addresses as part of their architecture to create secure and private connections between users and the VPN server.
Here’s how VPNs use non-routable IP addresses:
When you connect to a VPN, your device becomes part of the VPN’s internal network.
The VPN server assigns your device a non-routable IP address from a reserved private address space (e.g., 10.0.0.0/8, 192.168.0.0/16 in IPv4, or fc00::/7 in IPv6).
To share data securely, the VPN encapsulates the data packets in a secure tunnel.
In addition to tunnelling, VPNs employ encryption to protect the data within the tunnel.
The data is encrypted using encryption algorithms.
Routers on the internet route the encrypted packets based on the public IP address of the VPN server, and they are delivered to the VPN server’s location.
Upon reaching the VPN server, the data is decrypted, and the outer headers are removed, revealing the original data packets.
The VPN server then sends these decrypted packets to their intended destination on the internet.
The VPN server receives the response, encrypts it, and encapsulates it in a tunnel directed to your device’s non-routable IP address.
At last, the client receives the encrypted data, decrypts it, and delivers the original response to your application.
VPNs assign your device a non-routable IP address from a private range when you connect to them. They create a secure “tunnel” around your data, with the non-routable IP addresses as the source inside the tunnel.
Next, your data is encrypted and protected while travelling over the public internet. The VPN server decrypts the data and sends it to its intended destination using its own public IP address. When the response comes back, it goes through the same process in reverse, ensuring secure and private communication.
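The encapsulation step described above can be sketched in a few lines (a toy illustration, not a real VPN implementation; the Packet type and all addresses are hypothetical): the inner packet keeps the private tunnel addresses, while the outer header carries only the public endpoints that internet routers see.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def encapsulate(inner: Packet, client_public: str, vpn_public: str) -> Packet:
    """Wrap a tunnel packet (private addresses) in an outer packet (public
    addresses). A real VPN would also encrypt the serialized inner packet."""
    serialized = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    return Packet(src=client_public, dst=vpn_public, payload=serialized)

# Inner addresses come from a private 10.0.0.0/8 tunnel range.
inner = Packet(src="10.8.0.2", dst="10.8.0.1", payload=b"hello")
outer = encapsulate(inner, client_public="203.0.113.5", vpn_public="198.51.100.7")
print(outer.src, outer.dst)  # only the public addresses are visible to routers
```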
Non-routable IP addresses are specifically designed for internal communication and data transfer. This means they are reserved for use only within a private or corporate network.
No, IPv4 as a whole is not non-routable.
No, IPv6 is not non-routable. Similar to IPv4, IPv6 is designed to be routable.
IPv4 Non-Routable IP Address example - 10.0.0.0 - 10.255.255.255 (10.0.0.0/8) and IPv6 example - fc00::/7 (starting with fd).
https://frivgames.io/tricky-rick.html
Tricky Rick Game
Tricky Rick Game is an online game full of adventure and fun. You can have an excellent time playing Tricky Rick Game at Frivgames.io
How to Play Tricky Rick Game?
Help Rick to collect all the stolen fuel to refuel his spaceship and fly away from the planet. Use hammer, bombs, jetpack and other useful stuff to solve puzzles!
Tricky Rick Game Which Keyboard Keys to Play With?
- WASD / Arrow Keys – move
- S / Down Arrow – take/release an object
- CTRL – interact with objects: throw, hammer strike, invisibility mode
- SPACE – interact with elevators and fuel stations
- Esc / P – pause
http://www.devshed.com/c/a/JavaScript/Controlling-Browser-Properties-with-JavaScript/4/
It's interesting to note, also, that the manner in which you refer to windows changes depending on where you are when you do the referring. In order to close the current window, for example, you can always use window.close().
However, you can also affect other windows, simply by replacing the generic name "window" with the actual name of the window. For example, suppose you want to close a child window (previously assigned the name "baby") from a parent window; in that case, you would call baby.close().
Let's look at a simple example to see how this works. Here, the primary window consists of a menu containing links, each of which open up in a child window named "display". The child window can be closed either from the parent window, by clicking on the "Close Display Window" link, or from the child window itself, by clicking the "Close Me" link. Here's the code for the menu:
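The original code sample was lost in extraction; a minimal reconstruction consistent with the description (the link targets, page names, and window dimensions are assumptions) might look like:

```html
<!-- Menu page: each link opens its page in a child window named "display" -->
<script>
var display;
function openDisplay(url) {
    // window.open() returns a reference; the second argument names the window
    display = window.open(url, "display", "width=400,height=300");
}
</script>
<a href="javascript:openDisplay('page1.html')">Page 1</a>
<a href="javascript:openDisplay('page2.html')">Page 2</a>
<!-- Close the child window from the parent by prefixing close() with its name -->
<a href="javascript:display.close()">Close Display Window</a>
```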
Notice that I have prefixed the call to close() with the child window name.
Within the pages loaded, there exists a "Close Me" link as well, which can be used to close the child window directly. Here's what one of the pages loaded into the child window might look like:
In this case, since I'm closing the current window, I can use window.close() directly without worrying about the window name.
Thus far, I've shown you how to control the child window from the parent. It's also possible to work the other way around, controlling the parent window from the child. Every Window object exposes an "opener" property, which contains a reference to the window that created it. Therefore, even if you don't know the name of the parent window, it's still possible to access and manipulate it via this "opener" property.
Let's take a look at a simple example, resizing the parent window from the child:
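The code for that example was also lost in extraction; a plausible reconstruction (the dimensions are arbitrary) is a link in the child page that reaches back through the opener property:

```html
<!-- In the child window: resize the window that opened it -->
<a href="javascript:window.opener.resizeTo(640, 480)">Resize Parent Window</a>
```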
http://tomwilliamson.com/wpos_portfolio/identity-management-system/
Roles: Design, Consultant, and Developer
This software is the gatekeeper into systems for franchises and in-house software users and is a homegrown version of the Think Identity server. When I got the MVC ASP.Net application, it was unusable and page refresh times were taking upwards of 5 minutes.
The team and I reworked the Repository Pattern, introduced a new ORM, optimized Oracle PL/SQL queries according to their execution plans, optimized API calls, and implemented caching, which brought page refresh times down to between 1 and 5 seconds. We also performed various UI upgrades to the applications.
https://msdn.microsoft.com/en-US/library/bb886939(v=bts.10).aspx
Microsoft® BizTalk 2009 Accelerator for RosettaNet (BTARN) uses the following glossary terms.
- application adapter
An application that implements the application adapter interface. The notification mechanism on the acceptance of an incoming action message (request or response) invokes the application adapter. It implements two methods: BeginNotify and EndNotify. The public responder invokes the BeginNotify method, whereas the out-of-the-box private responder invokes the EndNotify method. The call to the Notify method means that the message was successfully saved into the MessagesToLOB table.
- action URL
The partner URL to which the home organization transmits an action message during an asynchronous process, for example, http://FabrikamServer/BTARNApp/RNIFReceive.aspx.
- BizTalk Accelerator for RosettaNet
An add-on product to Microsoft® BizTalk Server 2009 that helps organizations to build RosettaNet Implementation Framework (RNIF)-compliant solutions.
See also: BizTalk 2009 Accelerator for RosettaNet (BTARN) Administration
- BTARN 2009 Administration
A Microsoft BTARN 2009 application that lets you describe process templates and manage partner agreements.
- BizTalk Editor
A tool with which you can create, edit, and manage specifications. With BizTalk Editor you can create a specification based on a specification template, an existing schema, certain types of document instances, or a blank specification.
- BizTalk Orchestration Designer
A design tool you can use to create drawings that describe long running, loosely coupled, executable business processes. The XLANG schedule drawing is compiled in an XLANG schedule that you use to run the automated business process.
- BizTalk Server 2009
A Microsoft product for business-process automation and application integration both in and between businesses. BizTalk Server 2009 provides a powerful Web-based development and execution environment that integrates loosely coupled, long-running business processes, both in and between businesses.
BizTalk Server 2009 features include the composition of new and existing XLANG schedules; integration among existing applications; the definition of document specifications and specification transformations; and the monitoring and logging of run-time activity.
The server provides a standard gateway for sending and receiving documents across the Internet, and provides a range of services that help to ensure data integrity, delivery, security, and support for the BizTalk Framework and other key document formats.
A utility to clean BTARN artifacts off a computer.
- business action
A RosettaNet message that contains business content such as a purchase order request or a request for a quote. Together with business signals, these actions make up the necessary elements to complete the business activity specified by a particular Partner Interface Process (PIP).
See also: business signal, Partner Interface Process (PIP)
- Business Activity Monitoring (BAM)
A BizTalk Server feature that gives business users a real-time view of their heterogeneous business processes. This enables them to make important business decisions.
- business signal
A RosettaNet message, such as a ReceiptAcknowledgement or Exception, exchanged between two RosettaNet network applications to communicate certain events in the execution of a PIP instance. Together with business actions, these signals make up the necessary elements to complete the business activity specified by a particular PIP.
See also: business action
A utility to import a signing or encryption certificate from a .pfx (.p12) or .cer (.der) file into a private or public store for use with BTARN. A .pfx file, also known as Personal Information Exchange–PKCS #12, is typically protected by a password as it holds a private key that is used for decryption or signing. A .cer file (certificate file) holds the public key for encryption and validation of the signature.
- Chemical Data Exchange (CIDX) Chem eStandards
Uniform standards of data exchange developed specifically for the buying, selling, and delivery of chemicals. These standards are based on the universally recognized standards for electronic data exchange—XML. BTARN supports CIDX Chem eStandards.
- cluster
A group of high-level business processes, such as order management, inventory management, or service and support. The clusters addressed by RosettaNet represent core business processes for the supply chain industry.
- Data Universal Numbering System (D-U-N-S) number
A sequentially generated nine-digit number that uniquely identifies business locations, and is global in scope.
- delivery header
A part of a RosettaNet message. The delivery header is an XML document that identifies the message sender, the recipient, and message instance information.
See also: preamble header, service header, service content, RosettaNet message
- destination organization
An organization that has been designated in a messaging port as the destination for documents.
See also: organization, my organization
- digest algorithms
An algorithm that takes a message as input and produces a hash or digest of it, a fixed-length set of bits that depend on the message contents in some highly complex manner. Design criteria include making it extremely difficult for anyone to counterfeit a digest or to change a message without changing its digest. Applications that typically use digest algorithms are in message authentication and digital signature schemes. Widely used algorithms include MD5 and SHA1. BTARN supports both MD5 and SHA1 for incoming messages, and only SHA1 for outgoing messages.
- document definition
A set of properties that represents a specific document. Document definition properties include a pointer to a document specification and can include global tracking fields and selection criteria.
- document instance
A representation of the actual data that is sent to BizTalk Server. A document instance differs from a document specification in that the specification defines the structure of the data, while a document instance is a representation of the specific data that is contained in a structure.
- document type definition (DTD)
A standard definition that specifies which elements and attributes might be present in other elements and attributes and that specifies any constraints on their ordering, frequency, and content.
- double action transaction
A process where an initiator sends a request action, receives a signal, followed by a response action from the responder. The initiator finishes the process by sending a signal to the response action.
- Extensible Markup Language (XML)
A specification developed by the World Wide Web Consortium (W3C) that enables designers to create customized tags beyond the capabilities of standard HTML. While HTML uses only predefined tags to describe elements in the page, XML enables tags to be defined by the developer of the page. Tags for virtually any data item, such as a product or an amount due, can be used for specific applications. This enables Web pages to function as database records.
See also: map
- Extensible Stylesheet Language (XSL)
A style sheet format for XML documents. XSL is used to define the display of XML just like cascading style sheets (CSS) are used to define the display of HTML. BizTalk Server uses XSL as the translation language between two specifications.
- Line of Business (LOB) Application
The application that communicates with BTARN as the backend system.
- Loopback utility
A utility for developers to automatically generate a loopback agreement that is a mirror copy of a home-to-partner agreement. This lets you perform home-to-partner and partner-to-home message exchanges on a single computer.
- map
An XML file that defines the correspondence between the records and fields in one specification and the records and fields in another specification. A map contains an XSL style sheet that BizTalk Server uses to perform the transformation described in the map. Maps are created in BizTalk Mapper.
See also: Extensible Markup Language (XML)
- my organization
Represents your organization in a trading partner agreement.
See also: organization, trading partner, destination organization
- Multi-Purpose Internet Mail Extensions (MIME)
An extension of the Internet e-mail protocol that lets you use the protocol to exchange different kinds of data files on the Internet: audio, video, images, and application programs.
- non-repudiation
A way to make sure that the sender of a message cannot later refuse to recognize that the sender sent the message and that the recipient cannot deny having received the message. Non-repudiation of an incoming message requires that the message be saved by the receiver, and that the message should carry a digital signature using the signing certificate of the sender to ensure its authenticity. Non-repudiation of an outgoing message requires saving the acknowledgement message (incoming message from the recipient of the first message), and that the message should carry the digital signature of the digest of the original message using the signing certificate of the recipient.
See also: digest algorithms
- notification
An RNIF 1.1 process type where the initiator notifies the responder with a single message. The responder is expected to reply with a business signal as an acknowledgement.
See also: transaction, business signal, initiator, responder
- Notification of Failure (PIP 0A1)
A special PIP that indicates unexpected process failures. The initiator or the responder can initiate a Notification of Failure. It refers to an existing or previously exchanged process. Upon receipt of a 0A1, the receiving party makes sure that the referenced process is considered not valid.
The process of converting a RosettaNet message in its XML representation and vice versa.
- Partner Interface Process (PIP)
A PIP describes a set of business documents and agreement details, including document content details.
See also: RosettaNet Implementation Framework (RNIF)
- PIP Specification document
A document that contains guidance about the settings to use when you create a process configuration in the BTARN Management Console. You download the PIP Specification document and the PIP from the RosettaNet organization, from RosettaNet.org.
- preamble header
An XML node that identifies the name and version of the standard with which a business message is compliant. It is packaged together with other headers to form a complete RosettaNet Message. Also named preamble.
See also: RosettaNet message, service header, delivery header, service content.
- private process
Business processes that are internal to an organization. BTARN implements private processes as long-running BizTalk orchestrations.
- Process Configuration Setting (PCS) profile
Determines how a partner agreement runs. You use the PCS profile to enter the configuration details of a RosettaNet Partner Interface Process (PIP). All configuration values specified in a RosettaNet PIP specification map to one element in the PCS profile. You can use one PCS profile for multiple partner agreements.
- port
A named location that uses a specific implementation. In BizTalk Orchestration Designer, a port is defined by the location to which messages are sent or from which messages are received, and the technology that is used to implement the communication action. The location is uniquely identified by the name of the port.
- public process
Business processes that involve integration with trading partners as public processes. BTARN implements public processes as long-running BizTalk orchestrations. One public-process orchestration runs on the initiator side and one on the responder side. The BTARN Setup program provides versions of the initiator and responder public-process orchestrations for both RNIF 1.1 and RNIF 2.01. These public-process orchestrations implement all RNIF processes. Public processes hide the complexity of RNIF from the rest of the components. Besides enforcing the RNIF-compliant message flow, the public process also determines default-tracking settings and provides process state information at runtime.
- responder
The role of an organization in a transaction or notification process that responds to a request by a trading partner.
See also: initiator
- RosettaNet Implementation Framework (RNIF)
A standards framework that provides implementation guidelines for those companies that want to create interoperable software application components that run PIPs.
See also: Partner Interface Process (PIP)
- RosettaNet message
The logical grouping of the preamble header, delivery header (in the case of RNIF 2.0), service header, and service content.
See also: preamble header, service header, delivery header, service content
- RosettaNet object
A RosettaNet message enveloped for delivery in RosettaNet Implementation Framework version 1.1.
- schema
The definition of the structure of an XML file. A schema contains property information as it pertains to the records and fields in the structure.
- service content
The primary component of the RosettaNet message. It is an XML node that represents the business content specified by a particular PIP.
See also: RosettaNet message, preamble header, service header, delivery header
- service header
An XML document that identifies the parts associated with a business message, including the PIP, business activity and action, sending and receiving services, trading partners, and roles.
See also: preamble header, delivery header, RosettaNet message, service content
- signal URL
The URL to which the home organization transmits a signal message. For example, http://FabrikamServer/BTARNApp/RNIFReceive.aspx.
- single action notification
A process where the initiator sends a single action message and the responder replies with a message.
- specification
A BizTalk Server-specific XML schema. Specifications are created in BizTalk Editor and can be based on industry standards, such as EDIFACT, X12, and XML, or on flat files, such as delimited, positional, or delimited and positional. BizTalk Mapper uses specifications, opened as source specifications and destination specifications, to create maps.
- sync URL
The URL that the home organization uses to establish synchronous transactions with the partner, for example, http://FabikamServer/BTARNApp/RNIFReceive.aspx.
- synchronous transaction
A process where the initiator returns a response (double-action) or signal (single-action) on the same HTTP state without closing the connection.
- trading partner
An external organization with which your organization exchanges electronic data.
See also: organization, my organization
- trading partner agreement
An agreement between your organization and your trading partner. A trading partner agreement references a Process Configuration Setting profile, home organization, partner, and contains agreement specific configuration settings.
- transaction
An RNIF 1.1 process where the initiator sends a RosettaNet message, receives a signal, receives a RosettaNet message as an answer, and sends a signal for acknowledgement.
See also: notification
- validation adapter
An application that implements the Validation Adapter interface. The public responder invokes the Validation Adapter when it receives an action message (request or response). It can include any set of validation rules that a business may require before accepting an incoming message. BTARN natively performs validation in the receive pipeline, and in public orchestrations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187519.8/warc/CC-MAIN-20170322212947-00204-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 15,698
| 122
|
https://forum.step.esa.int/t/sentinel-1-data-preprocessing/11576
|
code
|
While calibrating the sentinel 1 data, why there comes black screen after sigma 0?
you mean after the calibration? You can use the image statistics tool to see if the raster is empty or not. Sometimes, only the contrast needs to be adjusted in the color manipulation tab.
I am doing the pre-processing of Sentinel 1 SLC data in SNAP.
While calibrating the VV POLARISED intensity, I am just getting the blank screen and not getting any image. And this is happening only for IW1,IW2,IW3 VV polarization channel.
For HV Polarization I am getting the output.
So please help me solve this problem; I am not able to rectify it, and I have searched a lot.
what about my question on the contrast? Did you check if the raster has values?
How to check the contrast?
I did not find color manipulation tab?
in the color manipulation tab you see if the raster is empty or not.
View > Tool Windows > Color Manipulation
Before multilooking the image appears, but after multilooking and radiometric calibration it does not show anything.
then it is not blank, only the value range is too large for the greyscale.
click on the icon to stretch the colours over a reasonable range.
Nothing appears after clicking on the 95% icon.
have you seen this range?
If it was empty, I would expect min0 max0. I have no Idea what is wrong then.
do I need to calibrate first and then do multilooking for VV polarization?
But for VH polarization, I also did the calibration after performing the multilooking, and I am not getting any 0 values.
yes, calibration before multilooking. The polarization does not matter here.
Please use the edit button to add things to your posts instead of making new ones. This keeps the discussion a bit cleaner.
yes, when I have done basic:
the minimum value is 0 and max is also 0.
The display range.
I was looking in the sliders; there was the range which I told you earlier.
So now what to do if min and max value is 0?
I have no explanation for this, sorry.
What is the name of your product?
It is SLC Sentinel 1 data.
I meant the full product name so I can find it online and test it myself.
I’m new here.
I have a similar issue where I’m working on Python (on an Azure VM) using snappy to pre-process the GRD tile of Sentinel-1. However I’m stuck when trying to view the pre-processed image on Jupyter notebook.
I tried downloading the .dim file to my system to view it on SNAP, I get a blank screen. Do I have to download the .img file (which is 2.5 GB) too?
yes the BEAM DIMAP format consists of both the dim file and the data folder. Both are required in this structure to be opened in SNAP.
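To double-check that both parts made it over, a small Python sketch (file names here are hypothetical) can verify that a product's .dim header has its matching .data folder sitting next to it:

```python
from pathlib import Path

def beam_dimap_complete(dim_path: str) -> bool:
    """Return True if a BEAM-DIMAP product is complete.

    SNAP expects the '.dim' header file and its sibling
    '<name>.data' folder to sit side by side.
    """
    dim = Path(dim_path)
    return dim.is_file() and dim.with_suffix(".data").is_dir()
```

For example, `beam_dimap_complete("S1_GRD_processed.dim")` stays False until both the header file and the data folder have been downloaded into the same directory.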
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00114.warc.gz
|
CC-MAIN-2022-49
| 2,621
| 34
|
https://aurasma.zendesk.com/hc/en-us/articles/360024116172-Transparency-map-alpha-
|
code
|
In order to use transparency in HP Reveal you need to store the alpha information in the alpha channel of the png - when applying the png as a diffuse colour Maya or 3DS Max will automatically connect the alpha channel of the png to the alpha in the shader.
It is worth noting that transparency can be a little bit temperamental in the HP Reveal 3D engine, so we recommend that you use it with caution; too much transparency applied to different objects in the same scene can cause rendering sorting issues, and end up rendering unwanted effects and artefacts.
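To illustrate at the file level what "storing the alpha information in the alpha channel of the png" means, here is a minimal, hypothetical Python sketch that writes an 8-bit RGBA PNG by hand (color type 6 is truecolor with alpha); in practice you would export this from Maya, 3DS Max, or an image editor:

```python
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    # Each PNG chunk: big-endian length, tag, data, CRC over tag+data.
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def write_rgba_png(path: str, rows) -> None:
    """Write 'rows' (lists of (r, g, b, a) tuples) as an 8-bit RGBA PNG."""
    h, w = len(rows), len(rows[0])
    # Bit depth 8, color type 6 (RGBA): the alpha byte travels with each pixel.
    ihdr = struct.pack(">IIBBBBB", w, h, 8, 6, 0, 0, 0)
    raw = b"".join(
        b"\x00" + bytes(v for px in row for v in px)  # filter byte 0 per row
        for row in rows
    )
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n"
                + _chunk(b"IHDR", ihdr)
                + _chunk(b"IDAT", zlib.compress(raw))
                + _chunk(b"IEND", b""))

# A 2x1 image: one opaque red pixel, one fully transparent red pixel.
write_rgba_png("alpha_demo.png", [[(255, 0, 0, 255), (255, 0, 0, 0)]])
```

The fourth byte per pixel is the alpha value the HP Reveal engine (via Maya or 3DS Max) reads when the PNG is applied as a diffuse colour.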
Why would you use transparency?
We can use transparency for different reasons, however two of the main ones would be:
1. Define the amount of visibility, translucency and or thickness of an object
- Effect is used in glass and or thin materials
- We recommend trying to use opaque materials when possible
2. Define the silhouette of an object without the need for geometry
- Usually used for foliage, and other geometry that has many duplicates in the scene
- We recommend cutting the geometry when possible even if that would mean increasing the poly-count as long as it is not a massive increase
In order to understand more about transparency please read the following article at wiki.polycount.com where transparency is explained in depth.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00046.warc.gz
|
CC-MAIN-2019-35
| 1,303
| 11
|
http://stackoverflow.com/questions/18226855/reading-accessing-to-app-settings-value-issue
|
code
|
Hello I'm trying to access this setting:
with following code:
string path = ConfigurationManager.AppSettings["swPath"].ToString();
StreamReader sr = new StreamReader(File.Open(path, FileMode.Open));
But I get following exception:
Object reference not set to an instance of an object.
May I ask where do I make a mistake? Thank you so much for your time.
Update issue for Ehsan Ullah:
Properties.Settings.Default.swPath = cestasouboru.Text;
Properties.Settings.Default.Save();
I think this isn't that helpful for you but how can I provide more helpful information?
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883858.16/warc/CC-MAIN-20140722025803-00120-ip-10-33-131-23.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 562
| 9
|
http://bitwaretech.com/download-powerpoint-2007-crack.html
|
code
|
No fumbling with settings and equipment. The new graphical and visual annotations are a standout feature of the software, as Excel gets a new formatting style for worksheet cells. It can also generate a working product key for Office 2007. For instance, the picture and chart formatting tools may not be visible or active until the user highlights an image or a chart. Additionally, it provides a digital environment.
After every few years a new and more effective version of Office surfaces on the scene and Microsoft Office 2007 is also here for innovation. Different formats are embedded into this Excel 2007 that makes charts, the calculation to be simplified. You can see users feedback in comment section!! Features of Microsoft Office 2007 Full Crack Microsoft Office 2007 contains virtually all the features that are common to the Microsoft Office family. When you place any control on this discipline, the program will indicate as you type. With PowerPoint Slide Libraries, you can easily repurpose slides from existing presentations stored on a site supported by Microsoft Office SharePoint Server 2007.
Microsoft Office 2007 Crack is filled with many advantages. You now have all of the rich features and capabilities of PowerPoint in a streamlined, uncluttered workspace that minimizes distraction and helps you achieve the results you want more quickly and easily. This Ribbon-based application is generally accepted by virtually all users in the world. It contains animations tools, transition tools, different slides format, tools for designing and drawing, amongst others. Moreover, it includes business tackle like the Outlook 2007, Writer 2007, and Access 2007. You are able to download the process in the links given below for totally free.
The Access is a tool that ensures the professional databases programmers, so the software is not changed much as before. Instead of wasting time on the use of a calculator, you can insert all the data into the Microsoft Excel and get the total result with ease. Scroll down to see instructions and system requirements. Microsoft Office Servers 2007 Microsoft Office 2007 dramatically expands the area of Microsoft Servers. The Microsoft Office Professional 2007 suite includes the basic programs Word 2007, Stand out 2007, and PowerPoint 2007.
Using the PowerPoint application, the user can be able to come up with more appealing presentations. But these presenting world-wide by efficient and fast. Zoom into points you want to emphasize. It gives a user the opportunity of making their mathematical and statistical calculations to be more accurate and easier. The Ribbon-based software has been widely made welcome by the users. Download Microsoft Office Enterprise 2007 Full Version Cracked, Office 2007 Crack, Office 2007 Serial Key, Office 2007 Product Key, Office 2007 Activation Key.
The ribbon provides quick access to some features that in the previous versions were hard to find, since they were hidden in complex drop-down menus. You can incorporate these versions into Microsoft Office 2007 without affecting or changing any settings. It records all sorts of documents in your individual style using the Office 2007 full version. These keys are generated by the Microsoft Office 2007 Product Key Generator. The Microsoft Office system has evolved.
New and useful set of templates are also added in Office 2007 that includes project tracking tools and technical support tickets for the support of the new users. Also, it offers business tools the same as the Outlook 2007, Writer 2007, and Access 2007. The Microsoft Enterprise including the type of workplace programs which are utilized in our everyday organization and area artwork. Manage the whole business with Microsoft firm Office Professional 2007. New and useful set of layouts have also added in Office 2007 that includes project tracking tools and technical support tickets for the backing of the new users. The Ribbon centered interface allows you to have almost instant access to different features of Office the year of 2007.
Simply duplicate the slides you want morphed together, move the objects based on how you want them animated, and click Morph. Everyone can see who replied to whom. The Microsoft Enterprise like the Office type programs used in the company of yours which is every single day as well as artwork. This version also has a standout graphical feature and a new visual observation feature. As a result, the user can put in pictures and fashions inside their credentials to make sure they are more beautiful.
This feature gives the user the permission of displaying things like themes, pictures format before applying it permanently. The user can have access to presentations on the server and also have their performances updated to match those of the server. The visual clarity of this software makes it very easy to use and leaves all the earlier versions far behind in terms of competency. Letter format, article format, a textbook format and some other common formats. I will supply you with the small description of those. Save time and stay organized with this familiar,. With its totally revamped software, the productivity has recently been boosted.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00349.warc.gz
|
CC-MAIN-2019-30
| 5,263
| 9
|
https://veenga.wordpress.com/tag/microsoft/
|
code
|
Dropbox has announced that it is partnering with Microsoft to let MS Office users access its service from within the Office apps, and allowing its users to edit Office files from their Dropbox mobile app. Users will be able to sync changes across devices and any edit they make will automatically be saved to their Dropbox account. The files created using this integration of Office and Dropbox could be stored and shared via Dropbox links like any other Dropbox files. As of now, this integration is only applicable to MS Office for Desktop, though according to the official announcement, Dropbox for Business customers with Office 365 licenses will soon be able to take advantage of these new features.
The announcement, made in the form of a blog post over here, does not mention anything about the financials related to this partnership, or which one of the companies approached the other.
Microsoft has decided to collaborate with Docker Inc. by providing Docker with support for new container technologies that will be delivered in a future release of Windows Server. Docker Engine will be compatible with the next release of Windows Server, and Docker will provide Docker Engine images for Windows Server in the community-driven Docker Hub. According to the public release related to this announcement, developers and organizations that want to create container applications using Docker will be able to use either Windows Server or Linux with the same growing Docker ecosystem of users, applications and tools.
Under this new partnership, Docker Hub will be integrated into Microsoft Azure directly through the Azure Management Portal and Azure Gallery, and the developers will be able to directly work with a pre-configured Docker Engine in Azure to create a multicontainer Dockerized application.
Microsoft has started removing the file-size limit it earlier placed on user files for its OneDrive account holders. What? You still can’t save individual files more than 2 GB in size? That’s okay, there’s nothing wrong with your computer or Internet. It’s just that the geniuses at Microsoft haven’t yet removed the file-size limit for every OneDrive account. Maybe they’re just testing how their hardware would react once they globally remove the size barrier. Nevertheless, it’s pretty cool that they are finally taking an ‘initiative’ like this, albeit a number of cloud storage and file hosting providers already provide their users with ‘file-size limit free’ service.
As mentioned before, there used to be (and for some people, still is) a limit of 2 GB on the size of individual files that could be uploaded or saved on Microsoft's cloud storage service OneDrive.
I’ve never been a fan of MSN Messenger, but I don’t hate it either. It was one of the first IM clients which pioneered the era of social networking on the Internet, and when such an application is put to sleep, you can’t help but feel a little sad. MSN Messenger, which Microsoft started calling Windows Live Messenger sometime in 2005, was already abandoned by Microsoft outside People’s Republic of China, and now it will be brutally executed by the Redmond, Washington based company on 31 October. Many PRC users have received an email by Microsoft notifying them of the discontinuation of the service and advising them to make a shift to Skype.
Despite its association with one of the most ‘not-loved’ companies in the world, MSN Messenger did manage to win the hearts of its users during its lifetime.
If truth be told, I believed Microsoft had stopped being its evil former self, but apparently, there’s no coming back from the ‘zone’. Once you are there, you are there forever. If a report from a Chilean magazine is to be believed, Microsoft lobbied against the government use of free software and promoted proprietary software with the help of a Chilean Member of Parliament. The new bill has been proposed after a bill which promoted the government use of Free and Open Source Software was met with enthusiasm by the entire parliament, except from the alleged Microsoft’s lackey.
Microsoft’s ‘brother in arms’ is one Jorge Daniel Farcas Insunza who proposed a bill which promotes the use of proprietary software and talks about tax concessions to the firms who use proprietary software. The tax concessions on proprietary software are not a new thing, many countries provide that.
After having a long run with the most dominant force in the Operating Systems market, Steve Ballmer is leaving the board of Directors of Microsoft, which essentially ends his relationship with the company. The ex-CEO of Microsoft was with the company for more than three decades and has played various roles during his long stay. Ballmer had been associated with Microsoft ever since its beginning, but recently has been spending most of his time with the Clippers, in civic contribution, teaching and studying. He sent his ‘departure letter‘ to Satya Nadella, the current Chief Executive Officer of Microsoft, yesterday.
I have never been a Microsoft fan, or of the people associated with Microsoft, but I still feel a little awkward about Ballmer’s departure. The guy was one of the originals and a reminder to Microsoft about their ‘humble’ beginnings. Regardless of my personal opinion, he will be missed.
We have sad news for all of you. Microsoft is dropping support for older versions of the world’s most widely used web browser for downloading other web browsers. You can use your beloved web browser’s older versions but Microsoft won’t release any security patches or provide technical support for those in the future. All you can do now is to update the best browser ever to be developed to its latest version. We are sad, almost on the verge of tears because Microsoft is abandoning our favorite web browser’s ancestors.
Too much? Ah, c’mon! Alright, let’s get back to the news. They are dropping support for older versions as those versions are not ‘optimized’ for the cloud and other services Microsoft provides. If you use Internet Explorer (we pity you), you are recommended by MS to update your IE to the latest version.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00421.warc.gz
|
CC-MAIN-2018-51
| 6,175
| 14
|
http://edwardsamuels.com/freefermemirror/freeferme/password.htm
|
code
|
To access freeferme.com, you must submit a proper User ID and Password:
If you don't have a password, you should check out the article by Professor Edward Samuels, at New York Law School, describing this
site. (He gives away the ID and password!) Check it out at www.edwardsamuels.com/copyright/beyond/articles/freeferme.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644915.48/warc/CC-MAIN-20230530000715-20230530030715-00161.warc.gz
|
CC-MAIN-2023-23
| 321
| 4
|
https://schoolleaders.thekeysupport.com/curriculum-and-learning/assessment-non-specific/effective-questioning-techniques/?marker=sub-sub-topic
|
code
|
The guidance in the first 2 sections below comes from teacher Andy McHugh in his guest article for Sec Ed on effective classroom questioning strategies.
Plan your questions carefully in advance
Start by deciding what your pupils need to know by the end of a topic and what skills you want them to be able to demonstrate.
For example, if you're teaching about landscapes in art, your pupils will need to know about colour theory, painting techniques and using different tools.
Choose questions that will build on the knowledge they learn and the skills they develop in response to the previous question, e.g. 'what colours do we usually see in a landscape painting?', 'how do we get the colour green?' and 'is green a primary colour?'.
For a more comprehensive example, see Andy's series of questions on religious experience in the article linked above.
Start with questions that
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474784.33/warc/CC-MAIN-20240229035411-20240229065411-00814.warc.gz
|
CC-MAIN-2024-10
| 878
| 7
|
https://community.powerbi.com/t5/Issues/connecArcGIS-Enterprise-Unable-to-Sign-in/idi-p/1440332
|
code
|
I am trying to connect to ESRI Enterprise maps using the ArcGIS connector, and it gives me the error below. I have checked the setting in the Power BI admin portal to enable ArcGIS Maps, and on the desktop app. It is enabled everywhere.
You may take a look at https://doc.arcgis.com/en/maps-for-powerbi/get-started/sign-in-to-arcgis.htm.
Hi @v-chuncz-msft ,
Already done that, it's not helping out.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141163411.0/warc/CC-MAIN-20201123153826-20201123183826-00527.warc.gz
|
CC-MAIN-2020-50
| 380
| 4
|
https://ap-gto.groups.io/g/main/message/81054
|
code
|
I am looking for some clarifications regarding the interactions / interoperability of the Ortho Model performed by the Keypad versus the pointing model created by APCC? The questions I want to understand are:
For some observing and simple single object imaging sessions, I think I would like to just have the orthogonality error corrected without creating / running a normal pointing model. It would simplify / speed up plate solving on both sides of the meridian. Ideally, for a quick portable imaging session, I would prefer not to have to rerun the Keypad Ortho Model at the beginning of each session when I don't want to run APPM. I will be guiding for these sessions and since they involve only a single target, plate solving will suffice for the evening's only target.
John
- I assume the APCC model takes precedence when a Keypad Ortho Model has been built. Is that correct?
- Does the APCC model erase the results of the Keypad Ortho Model or simply override it?
- If the APCC pointing corrections are turned off, does the most recent Keypad Ortho Model once again take effect or does a new one need to be built?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304515.74/warc/CC-MAIN-20220124054039-20220124084039-00185.warc.gz
|
CC-MAIN-2022-05
| 1,119
| 5
|
https://www.opencomedy.com/yasminsaoirse
|
code
|
I very recently have started doing stand up comedy after several years of thinking about it.
I have a background in performance, tour guiding and fashion, so this seemed the next logical step.
I am actively looking for opportunities, I am london based.
I also run a cabaret called Shady Mushrooms Spectacular @shadymushroom on Instagram.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400205950.35/warc/CC-MAIN-20200922094539-20200922124539-00515.warc.gz
|
CC-MAIN-2020-40
| 372
| 5
|
https://lists.debian.org/debian-user/2020/06/msg00236.html
|
code
|
On 6/7/20 8:57 PM, Nicolas George wrote:
We have to acknowledge: there are no Libre Software solutions for videoconferencing.
Having said all that, the instructions to get BBB going seem solid. Perhaps someone here with a bit of know-how will do this and then put a guide here? That would be very, very nice.
Here's my contribution: https://docs.bigbluebutton.org/2.2/install.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00168.warc.gz
|
CC-MAIN-2022-49
| 380
| 3
|
https://dev.adzerk.com/docs/optimization
|
code
|
Auctions are a delivery method with the goal of generating the most revenue for each ad impression. Effectively, rather than picking an ad randomly via a lottery system, it calculates the expected revenue (eCPM) you'll make for each ad and selects the ad that'll deliver the highest return.
It's used when:
- You have multiple advertisers with different bids
- You've set the expectation that bid amount will impact volume (versus guaranteeing a certain number of impressions/clicks)
- You are charging on CPC or CPA (versus straight CPM or flat fee)
- Your inventory is competing against RTB inventory
Adzerk uses effective Cost-Per-Mille (thousand impressions), or eCPM, to calculate the value of each ad in auction. It's the historical revenue of an ad per 1000 impressions:
eCPM = (Total Revenue / Impressions) * 1000
Ex: $5 eCPM = ($45 revenue / 9000 impressions) * 1000
If your flights use a CPM rate, the eCPM is the same as CPM because each impression generates the same amount of revenue. However, if you are charging on a cost-per-click (CPC) or cost-per-action (CPA) basis, revenue is not derived from impressions, so eCPM has to be calculated using click-through-rate and/or conversion-rate:
Given:
CPC: $5.00
Impressions = 9000
Clicks = 420
We will use clicks to calculate the total revenue:
Revenue = CPC * Clicks = 5.00 * 420 = $2100.00
Then we will use total revenue to calculate eCPM:
eCPM = (Revenue / Impressions) * 1000 = (2100.00 / 9000) * 1000 = $233.33
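The arithmetic above can be expressed directly in a few lines of Python (the function names are illustrative, not part of any Adzerk API):

```python
def ecpm(total_revenue: float, impressions: int) -> float:
    """Normalize revenue to effective cost per 1000 impressions."""
    return total_revenue / impressions * 1000

def ecpm_from_cpc(cpc: float, clicks: int, impressions: int) -> float:
    # For CPC flights, revenue is driven by clicks, not impressions.
    return ecpm(cpc * clicks, impressions)

print(ecpm(45, 9000))                            # 5.0, the $5 eCPM example
print(round(ecpm_from_cpc(5.00, 420, 9000), 2))  # 233.33
```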
Normalizing revenue into eCPM is useful for two auction situations:
- It enables CPC and CPA flights to compete against each other and against CPM flights
- It allows CPC and CPA flights that generate the most revenue to win the auction (instead of competing using their price).
Just because an advertiser is paying a higher CPC doesn't mean they'll generate the most revenue. In the scenario below, the $1.00 CPC bidder actually makes 150% more than the $2.00 bidder because their click-through-rate (CTR) is so much higher:
Flat rate flights can also compete in auctions, but their eCPMs cannot be calculated. Instead, a fixed eCPM must be used.
If a flight has eCPM Optimization settings enabled, Adzerk will calculate an eCPM for the ads in the flight approximately every thirty minutes. (You can define the historical time period for the optimization and the default eCPM for this calculation using the eCPM Optimization settings).
To calculate the eCPM, we use the:
- Total revenue from the time period specified for the calculation
- The number of impressions the ad has served during the time period
We then multiply this eCPM by the multiplier value (if applicable).
If the calculated eCPM exceeds the max eCPM from the flight settings, we will use the max eCPM as the final eCPM for the auction. (Also, if the calculated eCPM is lower than the min eCPM, we will use that instead.) Otherwise, we will use the calculated eCPM.
If a flight is still in burn-in mode, we will use the default eCPM specified on the flight.
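The selection of the final eCPM described above can be sketched as a simple clamp (a simplified model, not Adzerk's actual implementation):

```python
def final_ecpm(calculated: float, min_ecpm: float, max_ecpm: float,
               default_ecpm: float, in_burn_in: bool) -> float:
    """Clamp a calculated eCPM into [min_ecpm, max_ecpm], with burn-in fallback."""
    if in_burn_in:
        return default_ecpm  # not enough history yet: use the flight's default
    return min(max(calculated, min_ecpm), max_ecpm)

print(final_ecpm(250.0, 1.0, 100.0, 5.0, False))  # 100.0, capped at max eCPM
print(final_ecpm(250.0, 1.0, 100.0, 5.0, True))   # 5.0, burn-in default
```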
Updated 2 years ago
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872686.18/warc/CC-MAIN-20201020105000-20201020135000-00293.warc.gz
|
CC-MAIN-2020-45
| 3,049
| 24
|
https://www.aes.org/events/145/aoip/?ID=6465
|
code
|
AES New York 2018
AoIP Pavilion Session
Wednesday, October 17, 2:30 pm — 3:00 pm (AoIP Pavilion Theater)
Sample-Accurate Synchronization of SMPTE ST 2110 Audio Streams
Presenter:
Andreas Hildebrand, ALC NetworX GmbH - Munich, Germany
Detailed explanation of the synchronization fundamentals of ST 2110 and how these can be applied to achieve sample-accurate synchronization among audio streams.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00892.warc.gz
|
CC-MAIN-2023-40
| 395
| 6
|
http://kuroye.xyz/archives/2555
|
code
|
I'm Secretly Married to a Big Shot, Chapter 2282: She Really Likes Him
Not if it was smashed.
Mo Shixiu, you must come back soon.
"Anyway, believe me. Nothing bad will happen.
"Wait for me, I'll be back soon." Mo Shixiu hung up.
Bam! Bam! Bam!
She placed a hand on her stomach. "Baby, Mommy will definitely protect you. No one can hurt you. Mommy will do anything for you."
He was her husband, the person closest to her now.
She could only trust him.
"Without her, a woman who's suitable for Shixiu in every aspect will surely appear in the future. Sister Lin, this isn't something you should worry about. I know well what I'm doing. This is for Shixiu's own good."
Madam Mo sat on the sofa with a dark expression. "I've said it before, Jiang Luoli has to leave Shixiu."
If she couldn't trust him, their marriage would be meaningless.
Madam Mo looked up coldly. "My son is so outstanding, the person worthy of him must be his equal. Not only is Jiang Luoli's family background lowly, but she also doesn't know her place. Now, Shixiu has made a very irrational decision. Such a woman cannot continue to stay by his side.
Sister Lin lost her composure.
"Luoli." Mo Shixiu called her name softly. "Can you trust me?"
"Don't worry." Mo Shixiu comforted her softly. "As long as you don't leave the study, they won't be able to enter. Besides, there are bodyguards at home. If something really happens, they will protect you.
Because I'm really scared.
Since Madam Mo wasn't moved at all, she said anxiously and angrily, "Madam, stop this. It's not too late."
He said that he would definitely give her an explanation.
She was willing to believe him.
Jiang Luoli looked out of the window and clenched her fist.
Although the baby wasn't born yet, Jiang Luoli's motherly love had already been awakened.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00769.warc.gz
|
CC-MAIN-2023-14
| 3,539
| 35
|
http://forums.zimbra.com/administrators/61392-pass-amavisd-header-check-disable-spam-using-score.html
|
code
|
I'm using ZCS 8.0.2 OSE running on Ubuntu 12.04 64bit.
I need a clue on how to bypass the amavisd header checks, because after restarting the amavis service the configuration always reverts to the default.
I also need to disable spam checking based on the score, e.g. by increasing or decreasing the threshold.
Many thanks for your clue.
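One avenue to investigate: stock amavisd-new exposes knobs for both of these. The fragment below is a sketch using standard amavisd-new settings; whether Zimbra honors them, and the exact Zimbra template file to edit so they survive a restart (e.g. amavisd.conf.in), are assumptions to verify. Zimbra regenerates amavisd.conf from its own templates, which is likely why your changes keep reverting.

```perl
# amavisd.conf fragment (amavisd-new syntax). In Zimbra, edit the template
# the config is regenerated from, not the generated amavisd.conf itself.
@bypass_header_checks_maps = (1);   # skip header checks for all recipients
$sa_tag2_level_deflt = 8.0;         # score at which mail is tagged as spam
$sa_kill_level_deflt = 10.0;        # score at which mail is blocked/quarantined
```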
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278389.62/warc/CC-MAIN-20160524002118-00193-ip-10-185-217-139.ec2.internal.warc.gz
|
CC-MAIN-2016-22
| 283
| 4
|
https://globeinfo.live/ty-dolla-ign-shows-his-home-fridge-gym-gym-and-fridge-mens-health/
|
code
|
Ty Dolla $ign proves he puts in all the hard work to maintain the title of “Ultimate $ex $ymbol” when he shows us the contents of his home fridge and gym.
Ty Dolla $ign Shows His Home Fridge & Gym | Gym and Fridge | Men’s Health
Men’s Health Official Site: https://www.menshealth.com/
Men’s Health on Facebook: https://www.facebook.com/MensHealth/
Men’s Health on Twitter: https://twitter.com/MensHealthMag
Men’s Health on Instagram: https://www.instagram.com/menshealthmag/
Men’s Health on Pinterest: https://www.pinterest.com/menshealthmag/
#purpleemoji #tydollasign #gymandfridge #hottestinthecity
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00297.warc.gz
|
CC-MAIN-2022-33
| 616
| 8
|
http://forum.vectorlinux.com/index.php?topic=3150.msg19462
|
code
|
I am using Xmms 12.10, I want to know if there's a way to make it play aac+ streams. There are so many radios using this format, and it won't open any files of this type. Is there a way i could make it play them? And with the mp3 radios, if there's a problem with the connection, it just stops playing the radio, and won't buffer it again.
Please, if you know a way to make it work, tell me, the aac+ format has great quality at a low bitrate and it's the best alternative for mp3 stations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00190-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 490
| 2
|
https://intuicell.com/research/acting-up-an-approach-to-the-study-of-cognitive-development
|
code
|
Acting Up: An Approach to the Study of Cognitive Development
Despite decades of research, we lack a comprehensive framework to study and explain cognitive development. The emerging “paradigm” of action-based cognition implies that cognitive development is an active rather than a passive, automatic, and self-paced maturational process. Importantly, “active” refers to both sensorimotor activity (in the narrow sense) as well as to autonomous exploration (e.g., as found in active perception or active learning). How does this emphasis on action affect our understanding of cognitive development? Can an action-based approach provide a much-needed integrative theory of cognitive development? This chapter reviews key factors that influence development (including sensorimotor skills as well as genetic, social, and cultural factors) and their associated brain mechanisms. Discussion focuses on how these factors can be incorporated into a comprehensive action-based framework. Challenges are highlighted for future research (e.g., problems associated with explaining higher-level cognitive abilities and devising novel experimental methodologies). Although still in its infancy, an action-based approach to cognitive development holds promise to improve scientific understanding of cognitive development and to impact education and technology.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474643.29/warc/CC-MAIN-20240225203035-20240225233035-00213.warc.gz | CC-MAIN-2024-10 | 1,352 | 2

https://icingonthecakeblog.com/magento-github-the-go-to-source/ | code
Thanks for your interest in Magento Github!
I'm currently working on a detailed video with the latest information on this…
But in the meantime…
Check out the best ClickFunnels offers, benefits, and features below.
In this day and age, the way your business runs online can make or break you.
How do most businesses currently maintain a presence online?
The fact of the matter is, however, that websites have dramatically evolved over the past decade, and the old methods are no longer viable for a modern business.
Back then, it was enough to have a simple website with a home page plus services, pricing, about us, and contact pages.
A potential customer would visit your website, scroll around, browse the various pages, and consume content as they pleased.
However, if you are a business spending any money on advertising, you want to control what customers see on your site, present offers at the right time, and maximize the revenue you earn from each visitor.
How do you accomplish that?
With sales funnels.
Enter ClickFunnels
ClickFunnels is the easiest way to build high-converting sales and marketing funnels.
It is a unique tool built specifically to turn prospective customers into buyers.
It really is an all-in-one solution for creating sales funnels, and it includes landing pages, email integration, invoicing, webinars, membership sites, and much more. No wonder it has quickly become a favorite tool for marketers.
Below is my in-depth ClickFunnels review, covering its most popular features, pricing, pros and cons, and comparisons with competitors.
Magento Github: But First, What Exactly Is a Sales Funnel?
Sales funnels (also called marketing funnels) are multi-step campaigns designed to move potential prospects through your sales process and turn them into buyers.
Picture a real-life funnel. At the top you pour liquid in, and it narrows down toward one fixed destination.
In sales, something similar happens. At the top, visitors arrive at your website, but not all who enter make it out the other end as customers.
Many things have to happen from the moment a visitor enters your funnel to the moment they take action and successfully complete a purchase.
By breaking the customer's journey down into smaller steps, you can be much more precise about how and when you present an offer to your audience.
The steps in a funnel might look something like this:
- An unqualified lead arrives at a landing page
- The page communicates the first offer (something free, to collect an email address)
- Once the email address is collected, the main offer is pitched
- The lead becomes a paying customer
- Further email communication delivers customer value
- Further relationship building
- Recurring sales
ClickFunnels also has a graphic that explains this in a simple way.
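To make the stage-by-stage drop-off concrete, here is a small illustrative simulation. The stage names and conversion rates below are invented for illustration; they are not ClickFunnels data.

```python
# Illustrative funnel: each stage converts only a fraction of the
# previous one, so far fewer people exit the bottom than enter the top.
stages = [
    ("landing page visit", 1.00),  # hypothetical conversion rates
    ("email opt-in",       0.30),
    ("main offer viewed",  0.60),
    ("purchase",           0.10),
]

visitors = 10_000
for name, rate in stages:
    visitors = round(visitors * rate)
    print(f"{name:20s} {visitors:6d}")
```

With these made-up rates, 10,000 visitors shrink to 180 buyers — which is exactly why controlling each step of the journey matters.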
What is ClickFunnels?
As mentioned earlier, ClickFunnels is the best sales funnel software around today.
The company makes the bold claim of giving you everything you need to market, sell, and deliver your products online, and they definitely deliver.
A typical funnel will use an opt-in page (to collect email addresses), an email autoresponder (to send emails to your subscribers), plus an additional sales page with some content and an order form, possibly followed by more content, a membership site, and so on.
Previously, you would need different platforms and tools to accomplish all of these tasks, such as:
- Build a website
- Find hosting
- Find an autoresponder service
- Find membership site software
- Find split-testing software… and so on
But ClickFunnels handles everything within its own platform. You not only save a ton of money by not having to buy separate products and services, but you also avoid the technical mess of setting everything up yourself, and can focus on what's really important: growing your business.
ClickFunnels offers a free 14-day trial, so you can test the tool and really see whether it's right for your business.
* Rapidly Create Pages Using Templates and Elements *
Before getting too far, it's important to understand that a funnel is a collection of pages assembled in a strategic order, with the goal of converting as many prospects into customers as possible. A page is simply a collection of elements designed to get someone to take a specific action.
ClickFunnels provides more than 50 different elements to help you build the perfect page. The editor is extremely easy to use: all you have to do is drag and drop elements onto the page and update the text and styling to fit your needs. No coding skills required!
ClickFunnels also makes your life easier by providing a ton of free templates.
In fact, ClickFunnels offers over 37 types of pages for you to mix and match. These are broken down into the following 10 categories:
- Presell Pages: Survey Page, Article Page, Presell Page, Clickpop Page
- Optin Pages: Squeeze Page, Reverse Squeeze Page, Lead Magnet, Coupon
- Thank You Pages: Thank You Page, Offer Wall, Bridge Page, Share Page
- Sales Pages: Video Sales Page, Sales Letter Page, Product Launch Page
- OTO Pages: Upsell Page, Downsell Page
- Order Forms: Two-Step Order Page, Traditional Order Page, Video Sales Letter Order Page, Sales Letter Order Page, Product Launch Order Page
- Webinar Pages: Webinar Registration Page, Webinar Confirmation Page, Webinar Broadcast Room, Webinar Replay Room
- Membership Pages: Access Page, Member's Area
- Affiliate Pages: Access Page, Affiliate Area
- Other Pages: Application Page, Ask Page, Store Front, Home Page, Hero Page, Hangout Page, Live Demo Page
The pre-built templates are fully customizable, and they are what most people use.
You can pick a template, edit or replace its elements with your own, and your new page is ready to go.
You can also connect any funnel you create to your own email marketing service (if you don't use the one included in ClickFunnels), and use ClickFunnels' built-in billing system.
This is also a good time to mention that ClickFunnels provides very helpful and easy-to-understand training videos when you first sign up. I highly recommend going through them, because they quickly enable you to use the tool to its full potential, and you'll have more fun playing around. Magento Github
* Create One-Click Membership Sites *
One of the best features of ClickFunnels is the ability to quickly create membership sites and deliver content to your audience in one place.
Your membership site comes complete with registration pages, membership access pages, and content pages, which you can easily lock or drip-feed to your customers according to the purchases they made in your funnel.
ClickFunnels membership sites let you send emails, easily manage your emails, and build a community, all while eliminating the hassle associated with other solutions such as Kajabi or WordPress-based systems.
It's really convenient not to have to buy separate software or a plugin to create membership sites.
* Email Integration & Actionetics *
Every funnel brings opportunities to build an email list.
ClickFunnels supports email integration with all of the major email automation platforms, such as:
- Active Campaign
- Constant Contact
- Get Response
- Mad Mimi
- Market Hero
- And others
However, ClickFunnels also has its own powerful automation tool, called Actionetics.
Although you can create, schedule, and deliver emails just like in any other email marketing platform, Actionetics is much more than that.
I like Actionetics because it replaces not only your email marketing software but also your messenger marketing and SMS marketing tools. This takes automation to a whole new level and helps you deliver the right message to your customers exactly when they need it. A video overview of Actionetics is provided further below.
* Invoicing and Payment Integration *
An incredible feature of ClickFunnels is the ability to collect all of your customers' billing information right on your sales page. Selling is made so much easier when customers don't have to leave your site.
ClickFunnels integrates with major payment gateways such as PayPal, Stripe, and InfusionSoft, among others.
While you can start with the free 14-day trial, there are three pricing options for ClickFunnels:
- $97/month
- $297/month
- $997 bulk-discount plan (recommended)
I'll go into detail on each of these plans below.
1. ClickFunnels Standard Plan – $97/month
The standard plan includes all of the features you would need within ClickFunnels, but with limits on the number of funnels (20) and pages (100) you can have in your account, as well as on the number of visitors (20K) who can view your pages each month.
You also don't get advanced functionality such as ClickFunnels' own email marketing and affiliate tracking tools.
2. ClickFunnels Etison Suite – $297/month
This plan includes all the bells and whistles of the standard plan, with no limits. It also includes two additional products developed by ClickFunnels: Actionetics (email marketing) and Backpack (affiliate management system).
In Actionetics you can manage all of the contacts who subscribe to your list, send email broadcasts, and create a host of other automations. Magento Github
The difference between the two plans really comes down to the limits, plus Actionetics/Backpack. If you are a basic user and don't expect to use more than 20 funnels in your account, the Standard Plan should be enough.
However, if you plan to run an affiliate program, or want to keep your email marketing within ClickFunnels rather than use third-party software, the Etison Suite is for you.
You can always start on the lower plan and upgrade if needed.
3. Funnel Hacks System – $997
For anyone who is serious about their business, the ClickFunnels Funnel Hacks System is the deal of the century.
The $997 Funnel Hacks System includes robust training programs bundled with six months of access to the ClickFunnels Etison Suite.
This is an amazing offer, since outside of this program six months of the Etison Suite alone would cost you $1,782.
Not only are you saving $785, you're also getting a ton of trainings and guides to help you get the most out of ClickFunnels!
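A quick sanity check of the bundle math, using the prices quoted in this review:

```python
# Verify the Funnel Hacks savings claim: 6 months of Etison Suite
# at the regular monthly price vs. the one-time bundle price.
etison_monthly = 297   # Etison Suite, $ per month
months = 6
bundle_price = 997     # Funnel Hacks one-time price, $

full_price = etison_monthly * months
savings = full_price - bundle_price
print(full_price, savings)  # 1782 785
```

So the $1,782 and $785 figures above check out.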
ClickFunnels Pros and Cons
Pros:
- Building funnels is extremely simple, straightforward, and fast
- All-in-one platform with everything your business needs to win
- Split testing and conversion tracking included
- Email integration with all the major email autoresponder platforms
- Payment processing capabilities within ClickFunnels
- ClickFunnels is always adapting and updating with the times
- There is always support available (whether live or not)
- Extremely active Facebook group community
- Free 14-day trial lets you try it risk-free
Cons:
- As amazing as ClickFunnels is, it is definitely not a cheap solution, and it requires an ongoing subscription to use
- Support isn't always the fastest, and responses may take anywhere from one minute to 24 hours depending on the issue
ClickFunnels vs. Everyone Else
Many people ask how ClickFunnels compares to other landing page builders such as Leadpages, Unbounce, and Infusionsoft.
For the most part it's not really a fair comparison, because each of these tools excels in one area or another.
The chart above gives a detailed analysis, but I'll highlight some of the major comparisons below.
ClickFunnels vs Leadpages
Before ClickFunnels, Leadpages was the big dog.
Leadpages is simply a lead-capture tool, nothing more. You can build landing pages and lead boxes and collect leads… that's pretty much it. In addition, the Leadpages templates are limited in how far they can be customized.
ClickFunnels is far more versatile: it's easier to use, and it does far more than build lead-capture pages.
Put simply, Leadpages is really just a landing page builder, while ClickFunnels is focused on building highly integrated funnels.
ClickFunnels vs Infusionsoft
Infusionsoft is not a landing page or sales page builder. It has some of that functionality built in, but that's not what it's known for.
At its core, Infusionsoft is a CRM platform, one that lets you manage your entire customer database. ClickFunnels has this capability with Actionetics, but it's not nearly as advanced as Infusionsoft.
Infusionsoft is also very expensive, and it forces every new customer to pay $2,000 for a mandatory kickstart coaching package just to learn how to use the complex system (which is notoriously difficult to use).
ClickFunnels Affiliate Program
There are two main paths people take as ClickFunnels users.
Those who choose to use the tool for their business, in hopes of eventually reaching the Two Comma Club (over $1M in revenue).
And those who are interested in earning passive income as a ClickFunnels affiliate and winning the Dream Car contest (where ClickFunnels pays $500/$1,000 toward your dream car if you reach 100/200 active monthly signups, respectively).
With a whopping 40% monthly recurring commission, ClickFunnels easily has one of the best affiliate programs of any platform out there.
That's right: you get paid an ongoing 40% commission on every affiliate signup you make through the ClickFunnels Affiliate Program. But what does that actually translate to?
The basic plan is a $97/month investment and the Etison Suite plan is a $297/month investment, so you earn $38.80 per basic plan and $118.80 per Etison Suite plan… every single month!
On average, every 100 signups will generate around $4,000/month in affiliate commissions (more or less, depending on how many Etison Suite users are among them).
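The recurring-commission math above can be sketched directly; the 80/20 plan split in the example is an assumption, not a ClickFunnels figure, which is why the total lands above the $4,000 ballpark:

```python
# 40% recurring commission on each referred subscription.
COMMISSION_RATE = 0.40
PLAN_PRICES = {"standard": 97, "etison": 297}  # $ per month

def monthly_commission(signups):
    """Total recurring monthly commission for a mix of active signups."""
    return sum(PLAN_PRICES[plan] * COMMISSION_RATE * n
               for plan, n in signups.items())

# e.g. 100 active signups split 80 standard / 20 Etison Suite:
print(round(monthly_commission({"standard": 80, "etison": 20}), 2))  # 5480.0
```

Per-signup, that is $97 × 0.40 = $38.80 and $297 × 0.40 = $118.80, matching the figures quoted above.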
The Bottom Line
ClickFunnels is hands down the best platform if you want to quickly build high-converting sales funnels.
Because it was built from the ground up to be the best sales funnel builder, it beats all of the competition in that regard.
On the surface it may not be the cheapest product out there, but if you use it to its full potential, your business will become more profitable and you will save money by not needing other tools. Magento Github
If you have read this far into my ClickFunnels review, I suggest you see for yourself with a free 14-day trial here.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573519.72/warc/CC-MAIN-20190919122032-20190919144032-00223.warc.gz | CC-MAIN-2019-39 | 15,776 | 140

http://bigdata.sys-con.com/node/2532838 | code
By Business Wire | February 7, 2013 11:58 AM EST
Organizations across the globe have little insight into the true cost or value of their unstructured data, according to research by Barclay T. Blair of ViaLumina published today by Nuix, a worldwide provider of information management software. This lack of insight is becoming a critical risk as information volumes grow uncontrollably, regulatory pressures increase and the insights gained from Big Data separate winners and losers in the global economy.
The paper, “The Total Cost of Owning Unstructured Information: Decoding Information Governance, Big Data & E-Discovery,” advances a comprehensive new model for calculating information cost and value. The report challenges the widely held notion that organizations should keep all information forever because: storage is cheap; courts or regulators will penalize them for deleting information; or they will be able to extract value from the information at some uncertain future date.
“We are creating information faster than we are inventing ways to effectively manage it,” said Blair. “This creates enormous complexity and risk, while making it extremely difficult to generate value from information. Our paper is designed to help organizations close this gap with practical methods to improve their information governance.”
Blair's research examines hidden costs such as the loss of business opportunities when enterprises cannot easily find and use their own information. It reveals how the true costs are spread across all areas of a business, and how organizations fail to realistically evaluate data-related risks such as the massive expense and disruption of eDiscovery.
The paper also advances intriguing ideas such as:
- The Information Calorie. How organizations can foster changes in employee behavior that reduce information hoarding and mismanagement.
- Information Cap and Trade. Calculating and allocating information costs in a way that creates economic incentives for better information management and governance.
- Full Cost Accounting for Information. Borrowing an economic model from waste management to reveal the true costs of information mismanagement.
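The "full cost accounting" idea above can be sketched as a toy model. All cost categories and dollar figures below are hypothetical illustrations, not numbers from Blair's report; the point is only that the visible storage bill is one of several components.

```python
# Toy "full cost accounting" model for retained unstructured data:
# the raw storage bill is only one component of the true per-GB cost.
# Every category and figure here is hypothetical.
COSTS_PER_GB_YEAR = {
    "raw storage":         0.05,
    "backup/replication":  0.10,
    "management/admin":    0.40,
    "ediscovery exposure": 1.20,  # expected legal-review cost
}

def yearly_cost(gigabytes):
    """Total annual cost of retaining this much data, in dollars."""
    return gigabytes * sum(COSTS_PER_GB_YEAR.values())

print(round(yearly_cost(100_000)))  # cost of retaining 100 TB for a year
```

Even with made-up numbers, the exercise shows why "storage is cheap, keep everything" understates what retention actually costs.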
“By 2015, if not before, the growing costs and risks will have forced most large organizations to change the way they approach retaining, valuing and disposing of their data,” said Eddie Sheehy, CEO of Nuix. “Barclay Blair’s report has delivered innovative ideas and practical frameworks that will progress the way information governance vendors and practitioners think and work for many years to come.”
To read the report, visit www.nuix.com/TCOreport.
ViaLumina is a specialized professional services practice offering Information Governance consulting services. We help our clients maximize the value of information and comply with legal requirements for its management. ViaLumina accomplishes this by providing a range of Records and Information Management (RIM), compliance, assessment, implementation, training and other professional services.
Nuix is a worldwide provider of information management technologies, including eDiscovery, electronic investigation and information governance software. Nuix customers include the world's leading advisory firms, litigation support providers, enterprises, government departments, law enforcement agencies, and all of the world's major corporate regulatory bodies.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718957.31/warc/CC-MAIN-20161020183838-00529-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 14,135 | 64

http://www.ironpaper.com/webintel/articles/blogging-today-the-choices-you-dont-know-about/ | code
Although social conversation, utilizing networks like Twitter, StumbleUpon and Facebook, have affected how we publish content online, blogging, and the integration of blog technologies, remains an important cornerstone of the web. Social networks help build audience and extend the reach of content, but established blogs link to or aggregate smaller sites, sending viewers to read more and produce original content.
Of course there are the well known blogging platforms like WordPress, Blogger and Tumblr. WordPress is the leader in blogging. The platform powers almost 19 percent of the Web and has been downloaded almost 50 million times. There's Blogger, an easy-to-use, free platform that requires only a Google account to get started. And there's the "hip" choice, Tumblr, the first mainstream service to combine blogging and social media. Tumblr has a strong community of users, and content can be easily re-blogged, making it easy to curate rather than relying solely on producing original content. There's also Medium, started by Twitter founders Ev Williams and Biz Stone. It has a feature letting users edit and annotate other people's work.
But there are other blogging platforms out there bringing fresh ideas and opportunities. Here are a few to consider if you're looking to launch a new blog.
- JUX is a great blogging platform for portfolios, business, and events. Slideshows can share images and display content beautifully.
- Jekyll. GitHub pages are powered by Jekyll, so a Jekyll blog site using GitHub can easily be deployed.
- Squarespace is a blogging platform popular with business users. Developing and hosting a blog is core, but the platform can also be used to create and manage a range of websites, such as an e-commerce site.
- BookLikes is a blog platform designed for book lovers. It helps people share their reading likes and discover new books through personal review.
- Postach.io. This blogging service works with Evernote, allowing users to write posts using a dedicated notebook. Postach.io hooks into the comment engine Disqus, supports Google Analytics, allows for custom domains, and social sharing.
- SETT is a community-focused blogging platform that promises engagement. It claims it can help writers get 98 percent more comments and more attention based on its community of users.
- Ghost is an open-source blogging platform born from Kickstarter. The platform has garnered praise for its elegance and promises "the full Ghost software with all bells, whistles, themes, plugins, and some extras that are only available with us."
- TypePad has a small monthly fee and includes design templates, unlimited storage and customer service. Typepad emphasizes reliability, and is an ‘out-of-the-box’ service.
- Blog.com, with more than 2 million bloggers, has lots of features: stats, domain redirection, personal favicons, lots of themes, lots of widgets, multi-author blogs, posting from mobile and more.
- Posthaven arose from the shutdown of Posterous. The service is a 'light-blogging' experience, allowing posts and multimedia to be published easily and quickly. Blog pages can be open or permission-based, so a blog can be public or only for friends.
- Textpattern is an open source content management system that allows you to easily create, edit and publish content and make it beautiful in a professional, standards-compliant manner.
- Squidoo is a free platform that makes it easy to publish your interests and earn a royalty. It’s a knowledge based blogging platform.
- Hubpages is a write and earn blogging platform for authors to share advice, reviews and tips with hundreds of other authors.
- Weebly is an affordable, easy-to-use platform for blog creation, including a WYSIWYG page builder, media integration, and domain management in a simple interface.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120092.26/warc/CC-MAIN-20170423031200-00529-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 3,853
| 17
|
https://www.libhunt.com/compare-neodo.nvim-vs-nvim-projectconfig?ref=compare
|
code
|
|neodo.nvim|nvim-projectconfig|
|4 days ago|9 months ago|
|MIT License|MIT License|
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
First time developing a plugin, have some questions
2 projects | reddit.com/r/neovim | 14 May 2022
How do I set project-specific keymaps?
3 projects | reddit.com/r/neovim | 27 Dec 2021
Another way is to add nvim config to each project using this plugin: https://github.com/windwp/nvim-projectconfig. Or just search for "vim project based config" in some search engine.
Linting when contributing to projects with different styling guides?
3 projects | reddit.com/r/neovim | 2 Nov 2021
How do you stop LSP clients?
2 projects | reddit.com/r/neovim | 13 Sep 2021
Or safer than .nvimrc.. use plugin nvim-projectconfig
AutoSource: Manage Vim configuration for local projects
5 projects | reddit.com/r/neovim | 27 May 2021
why should one use this over https://github.com/windwp/nvim-projectconfig which is mostly in lua
Project based config in vim
4 projects | dev.to | 1 Apr 2021
Plugin: nvim project config
Project config plugin
7 projects | reddit.com/r/neovim | 29 Mar 2021
https://github.com/windwp/nvim-projectconfig lol, why do you need to put the project config in the same folder as your code? You can put it in your .config/nvim/projects where only you can see it, and you don't have to worry about security.
How to temporarily disable lsp?
1 project | reddit.com/r/neovim | 23 Feb 2021
this is a reason i wrote that plugin https://github.com/windwp/nvim-projectconfig
What are some alternatives?
vim-editorconfig - Yet another EditorConfig (http://editorconfig.org) plugin for vim written in vimscript only
editorconfig-vim - EditorConfig plugin for Vim
vim-sleuth - sleuth.vim: Heuristically set buffer options
neovim-session-manager - A simple wrapper around :mksession
nvim-lspconfig - Quickstart configs for Nvim LSP
project-config.nvim - Per project config for Neovim
vim-addon-local-vimrc - kiss local vimrc with hash protection
vim-localvimrc - Search local vimrc files (".lvimrc") in the tree (root dir up to current dir) and load them.
vim-autosource - Manage Vim configuration for projects.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00643.warc.gz
|
CC-MAIN-2022-33
| 2,422
| 37
|
https://easypano.com/forum/2/2027.html?page=last
|
code
|
P.S. On another note - a basic question about pano buttons.
How do you slow down the zoom and pan speed when a button is pressed? I know how to slow down the auto pan - but there doesn't seem to be an option for activated buttons.
In the ShotSpot string for the < and > arrows [these are from my pano page]:
<PARAM name = shotspot0 value = " x323 y230 a343 b250 u'ptviewer:startAutoPan(-0.2,0,1.0)' ">
<PARAM name = shotspot1 value = " x344 y230 a364 b250 u'ptviewer:startAutoPan(0.2,0,1.0)' ">
These parameters override the normal initial view AUTO param.
Note the (pan_inc,tilt_inc,zoom) values. "inc" means increment or step. The pan increment can be a negative value, so -0.2 means rotate to the left at two-tenths of a degree per second. A positive value rotates to the right.
The middle value, TILT (default 0), points directly at the center of the image on the horizon (50% down from the top of the image). So when you change the value from 0 to -10, it will rotate and tilt down 10 degrees [recommended for special situations only]. Since it's an INCREMENT value, the next button click will step down by 10 more from the last tilt value (20 degrees accumulated).
And the last value is ZOOM, so you can step in farther with each zoom-in button press. The default is 1.0, or normal.
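To answer the original question about slowing the buttons down: since the first startAutoPan argument is degrees per second, shrinking it should slow the button-activated pan. A sketch, reusing the shotspot coordinates from above; the halved 0.1 increment is my assumption for "slower", so tune to taste:

```html
<!-- Assumption: same shotspots as above, but with the pan increment halved
     from 0.2 to 0.1 degrees per second for a slower button-activated pan -->
<PARAM name = shotspot0 value = " x323 y230 a343 b250 u'ptviewer:startAutoPan(-0.1,0,1.0)' ">
<PARAM name = shotspot1 value = " x344 y230 a364 b250 u'ptviewer:startAutoPan(0.1,0,1.0)' ">
```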
Oh, and for music or audio I would suggest using a Flash streaming MP3 music loop [made using Swish] rather than a full-length song.
There are sample ON/OFF buttons in the Swishzone.com library. Yes, it gets rather tiring hearing the audio over and over... sometimes you need peace and quiet, so click on the OFF button.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00632.warc.gz
|
CC-MAIN-2022-27
| 1,648
| 11
|
https://www.mail-archive.com/vchkpw@inter7.com/msg18098.html
|
code
|
Hi list, I've been setting up a new mailserver for us over the last few days and have been running into trouble. Just to mention: a quite similar installation (without mysql-support and some other patches) has already been running for about a year.
Everything is working fine except relaying mails with authentication. I used plain qmail 1.03 with the following patches:

big-concurrency.patch
qmail-0.0.0.0.patch
qmail-maildir++-universal.patch
sendmail-flagf.patch
big-ext-todo-20030101
[( qmail-1.03-starttls-smtp-auth.patch ) <= replaced it with the two corresponding single patches]
qmail-smtpd-auth-0.31
tarpit.patch
doublebounce-trim.patch
qmail-1.03-tls-20021228.patch
qmail-1.03.errno.patch
qmail-smtpd-relay-reject
patch-qmail-badmailfrom-wildcard
qmail-1.03.qmail_local.patch
qmailqueue-patch

After that, I installed vpopmail with the following options:

./configure --disable-roaming-users --enable-logging=v --disable-ip-alias-domains --disable-paswd --enable-clear-passwd --enable-auth-module=mysql --enable-auth-logging=y --enable-mysql-logging=y --disable-mysql-limits --enable-valias --enable-vpopuser=vpopmail --enable-vpopgroup=vpopmail

I can login and authenticate by pop3(s) and imap(s)!

My run-script for qmail-smtpd is:

#!/bin/sh
VPOPMAILUID=`id -u vpopmail`
VPOPMAILGID=`id -g vpopmail`
MAXSMTPD=`cat /var/qmail/control/concurrencyincoming`
exec /usr/local/bin/softlimit -a 20000000 \
/usr/local/bin/tcpserver -vv -P -H -R -c "$MAXSMTPD" -l mail03.our-domain.tld -x /etc/tcp.smtp.cdb \
-u "$VPOPMAILUID" -g "$VPOPMAILGID" 0 25 \
/var/qmail/bin/qmail-smtpd \
mail03.our-domain.tld /var/vpopmail/bin/vchkpw /bin/true 2>&1

My mysql-log shows:

/usr/sbin/mysqld: ready for connections.
Version: '4.0.20' socket: '/var/run/mysqld/mysqld.sock' port: 3306
040521 21:23:32 Aborted connection 1 to db: 'vpopmail' user: 'vpopmail' host: `mail03.our-domain.tld' (Got an error reading communication packets)
040521 21:23:32 Aborted connection 2 to db: 'vpopmail' user: 'vpopmail' host: `mail03.our-domain.tld' (Got an error reading communication packets)

Mysql is running on another server and I am able to login from the mail-server to the mysql-server with the defined user.
Rights are granted as documented in the corresponding vpopmail-Readme (for mysql). If I try to connect with any mailclient, I am always asked for username/password which is provided correctly. Server capabilities are shown as Security: None OR TLS Password: PLAIN (since I commented out CRAM-MD5 in the corresponding source). Does anyone have a clue where the problem could be or give me any hints for solving it? Greetings Tobias
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217909.77/warc/CC-MAIN-20180821014427-20180821034427-00625.warc.gz
|
CC-MAIN-2018-34
| 2,608
| 2
|
https://www.janeandphil.com/2017/01/quiet-hours.html
|
code
|
Our very nice Greek neighbors (no sarcasm, I truly mean that) are definitely going to be told off for this one. Not by me, because they're not waking up my infant, but definitely by the Americans underneath.
I should point out (because I know someone's going to be like "10:45, what are you, a granny?!") that this is the beginning of the party. People just started arriving a half hour ago. Winter quiet hours began at 10.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00597.warc.gz
|
CC-MAIN-2021-43
| 423
| 2
|
https://community.freepbx.org/t/extension-specific-blacklist/6643
|
code
|
I want to have the blacklist for a specific extension. The problem is that if you blacklist a number, it is for the whole box and not an individual extension.
Let us say we have 3 extensions: 1. CEO, 2. Customer Service, 3. Sales.
The CEO can blacklist numbers, but those numbers should not be blacklisted for Sales and CS.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00079.warc.gz
|
CC-MAIN-2022-40
| 318
| 3
|
http://emix8.org/forum/viewtopic.php?f=1&t=1376
|
code
|
I've been experimenting with making GUIs in ZGameEditor,
and I'd like to make a game in the style of Wolfenstein (just for a bit of fun) where the player's weapon is visible in front of the camera, but I've been struggling to get this to work.
Basically I want to have a flat object facing the camera on the X, Y and Z axes.
I've been using this code, which works for the Z and X axes, but not Y.
Sorry to post this again, I think Kjell might have answered this very same question a few years ago, but I can't find the code.
Code:

float a, s, c, r, x, y;

a = App.CameraRotation.Y*PI*2;
s = sin(a);
c = cos(a);
r = tan(App.FOV/360*PI); // If you're using a constant FOV, you can swap out this calculation with the resulting value.

x = App.MousePosition.X*r*App.ViewportWidth/App.ViewportHeight; // If you're using a constant aspectRatio, you can swap out "App.ViewportWidth/App.ViewportHeight" with a specific value.
y = App.MousePosition.Y*r;

cursor.position.X = App.CameraPosition.X+x*c+s;
cursor.position.Y = App.CameraPosition.Y+y;
cursor.position.Z = App.CameraPosition.Z+x*s-c;

// If you want the box to mimic the orientation of the camera, un-comment the following lines
cursor.rotation.z = 0-App.CameraRotation.z;
cursor.rotation.X = 0-App.CameraRotation.X;
cursor.rotation.Y = 0-App.CameraRotation.Y;
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436828.65/warc/CC-MAIN-20200604001115-20200604031115-00240.warc.gz
|
CC-MAIN-2020-24
| 1,316
| 7
|
https://davidroessli.com/logs/2009/03/microsoft_sustainability/
|
code
|
23 Mar 2009
posted in daily
May 2009: Apparently this video has been pulled off Vimeo. You can find it in the Work section of Oh, Hello.
This is a cool presentation that would have found its place at LIFT 2009 in Geneva.
More details and the whole keynote of Stephen Elop, President of the Microsoft Business Division, on TrendsNow.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249595829.93/warc/CC-MAIN-20190224044113-20190224070113-00462.warc.gz
|
CC-MAIN-2019-09
| 328
| 5
|
http://mail-index.netbsd.org/tech-toolchain/2014/09/12/msg002434.html
|
code
|
On Fri 12 Sep 2014 at 09:32:53 +0200, Martin Husemann wrote:
> The issues are generic, random race conditions in the NetBSD build system

Aha! Those may be related then to the issues I've seen whenever I tried parallel builds on my own machine, that apparently never happened to anyone else, but which disappeared if I omitted the -j option. For one of those I found a mail I wrote: http://mail-index.netbsd.org/current-users/2011/05/03/msg016570.html

More recently I had a mysterious -j failure where I had to remove a small subtree in my objdir before a non-j make worked again. Looking through the logfile, I now see this, a bit before a compile failure:

compile netpgpverify/bufgap.o
create amd/ops_efs.d
create amd/ops_mfs.d
compile libgroff/uniuni.o
compile netpgpverify/digest.o
create amd/ops_nfs.d
create amd/ops_nfs3.d
compile netpgpverify/libverify.o
create amd/ops_nullfs.d
create amd/ops_pcfs.d
compile libgroff/version.o
build libgroff/libgroff.a
eval: Cannot fork
eval: Cannot fork
eval: Cannot fork
eval: Cannot fork
eval: Cannot fork
: permission denied
create amd/ops_tfs.d
dependall ===> gnu/usr.bin/groff/src/libs/libdriver
create libdriver/input.d
create libdriver/printer.d
create amd/ops_tmpfs.d

and later

link grodvi/grodvi
dvi.o: In function `dvi_font::~dvi_font()':
dvi.cpp:(.text+0x18): undefined reference to `font::~font()'
A failure has been detected in another branch of the parallel make

Maybe the parallelism got a bit out of control there with the "cannot fork" errors?

-Olaf.
--
___ Olaf 'Rhialto' Seibert -- The Doctor: No, 'eureka' is Greek for
\X/ rhialto/at/xs4all.nl -- 'this bath is too hot.'
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530066.45/warc/CC-MAIN-20220519204127-20220519234127-00560.warc.gz
|
CC-MAIN-2022-21
| 1,658
| 2
|
http://gizcore.com/demo.html
|
code
|
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00576.warc.gz
|
CC-MAIN-2021-17
| 755
| 6
|
https://www.modis.com/en-us/job-seekers/job/austin/net-c-developer-midlevel-35-years/US_EN_6_983168_1323495/
|
code
|
As a member of the Client Facing Applications team, you will join a software development organization that owns multiple applications that directly interface with our user community, including our flagship account management web site Workplace.
You will join an Agile team of software engineers focused on the Software Development Lifecycle of our applications ensuring high engagement and high performance in application delivery.
You will be responsible for software development and testing at the direction of senior staff throughout the Software Development Lifecycle.
You will be part of a team where everyone is committed to an attitude of "whatever it takes" to deliver quality software on time and on budget including design, coding, testing, deployment and certification.
You will participate in program level, project level, and technical governance processes to ensure our applications meet and exceed expectations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886121.45/warc/CC-MAIN-20200704104352-20200704134352-00166.warc.gz
|
CC-MAIN-2020-29
| 926
| 5
|
http://www.yelp.co.uk/search?find_desc=Property&find_loc=Crawley+Down&utm_campaign=qype_uk&utm_source=%28direct%29
|
code
|
Was thoroughly impressed with not only the service I received, but also the price. Before you buy anywhere else, I would definitely try Alfresco. You won't be disappointed.
This user has arrived from Qype, a European company acquired by Yelp in 2012. We have integrated the two sites to bring you one great local experience.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776436274.65/warc/CC-MAIN-20140707234036-00005-ip-10-180-212-248.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 322
| 2
|
https://www.spellsofmagic.com/read_post.html?post=405259
|
code
|
Too many books and articles address this matter, but they leave me confused... where would I do the following, when, and how many times?
And these are my points to ponder...
1 - At invoking, I see that the author said (invoke air at east, turn to north and invoke earth at north, and so on till you close the circle). I think in this manner I invoke the 4 elements at once... so when should I use this one? And what is the benefit of invoking the 4 elements at once if I want to deal with fire or water, for example?
2 - If I wish to invoke a particular element, do I go to its quarter, invoke just it and finish (without completing the circle), or invoke it at the 4 quarters? And at banishing... do I banish the element's spirits at its own quarter, or do I have to banish at the 4 directions? ... And if I want to invoke 2 elements, what would be the regime?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00483.warc.gz
|
CC-MAIN-2018-09
| 810
| 4
|
http://www.mp3car.com/newbie/77795-yet-another-googleearth-gpsr-program-print.html
|
code
|
Yet another GoogleEarth GPSR program
Ok, here's the thing. I wrote a program to plot data from a GPSR into Google Earth. The program, CommTest2, will plot a path, altitude, and velocity. NMEA data is also logged in case you want to play back a recorded path.
CommTest2 Link at: http://briefnotion.250free.com/
I threw the program together and was surprised at how well the features work together. Completely alpha, though, and never tested by anyone else. First posting. No instructions. Good luck. Post questions here.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163729.14/warc/CC-MAIN-20160205193923-00139-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 510
| 4
|
https://www.robertocarroll.com/journal/notes-from-intro-to-d3
|
code
|
Notes from "Intro to D3"
Having built a few things in D3, I’m going back to the basics to learn it from the ground up. First up is Intro to D3 by Square.
I need to learn more about SVG.
Where HTML has the <div> and <span> tags, SVG has the <g> tag for an arbitrary group. You'll see <g> a lot in D3 examples. The <path> tag is powerful but complex; it can be used for either lines or arbitrary filled-in shapes depending on the styling.
D3 provides “helpers” for:
Data binding or “the join” is the heart of D3. Create a selection and use .data() to bind data to the selection.
- Add elements with selection.enter()
- Remove elements selection.exit()
- Transition between things with selection.transition()
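The enter/update/exit split above can be pictured without any DOM at all. As an illustrative TypeScript sketch of the set logic behind the join (the names `join`, `boundIds`, and `Datum` are my own, not D3's API):

```typescript
// Illustrative sketch of D3's data join as pure set logic (not D3's API).
// "boundIds" stands in for the keys of elements already on the page;
// "data" is the new dataset being joined against them.
type Datum = { id: string; value: number };

function join(boundIds: string[], data: Datum[]) {
  const dataIds = new Set(data.map((d) => d.id));
  const bound = new Set(boundIds);
  return {
    enter: data.filter((d) => !bound.has(d.id)),     // new data with no element yet
    update: data.filter((d) => bound.has(d.id)),     // data matched to an existing element
    exit: boundIds.filter((id) => !dataIds.has(id)), // elements whose datum disappeared
  };
}

// One tick of a join: "a" leaves, "b" stays, "c" enters.
const result = join(["a", "b"], [{ id: "b", value: 2 }, { id: "c", value: 3 }]);
```

In real D3 (with a key function) this same partition is what selection.data() computes; selection.enter() and selection.exit() then expose the first and last groups.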
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100593.71/warc/CC-MAIN-20231206095331-20231206125331-00794.warc.gz
|
CC-MAIN-2023-50
| 695
| 10
|
https://preview.aclanthology.org/ingestion-script-update/people/j/jingwei-yi/
|
code
|
Query-aware webpage snippet extraction is widely used in search engines to help users better understand the content of the returned webpages before clicking. The extracted snippet is expected to summarize the webpage in the context of the input query. Existing snippet extraction methods mainly rely on handcrafted features of overlapping words, which cannot capture deep semantic relationships between the query and webpages. Another idea is to extract the sentences which are most relevant to queries as snippets with existing text matching methods. However, these methods ignore the contextual information of webpages, which may be sub-optimal. In this paper, we propose an effective query-aware webpage snippet extraction method named DeepQSE. In DeepQSE, the concatenation of title, query and each candidate sentence serves as an input of query-aware sentence encoder, aiming to capture the fine-grained relevance between the query and sentences. Then, these query-aware sentence representations are modeled jointly through a document-aware relevance encoder to capture contextual information of the webpage. Since the query and each sentence are jointly modeled in DeepQSE, its online inference may be slow. Thus, we further propose an efficient version of DeepQSE, named Efficient-DeepQSE, which can significantly improve the inference speed of DeepQSE without affecting its performance. The core idea of Efficient-DeepQSE is to decompose the query-aware snippet extraction task into two stages, i.e., a coarse-grained candidate sentence selection stage where sentence representations can be cached, and a fine-grained relevance modeling stage. Experiments on two datasets validate the effectiveness and efficiency of our methods.
News recommendation is a widely adopted technique to provide personalized news feeds for the user. Recently, pre-trained language models (PLMs) have demonstrated the great capability of natural language understanding and benefited news recommendation via improving news modeling. However, most existing works simply finetune the PLM with the news recommendation task, which may suffer from the known domain shift problem between the pre-training corpus and downstream news texts. Moreover, PLMs usually contain a large volume of parameters and have high computational overhead, which imposes a great burden on low-latency online services. In this paper, we propose Tiny-NewsRec, which can improve both the effectiveness and the efficiency of PLM-based news recommendation. We first design a self-supervised domain-specific post-training method to better adapt the general PLM to the news domain with a contrastive matching task between news titles and news bodies. We further propose a two-stage knowledge distillation method to improve the efficiency of the large PLM-based news recommendation model while maintaining its performance. Multiple teacher models originated from different time steps of our post-training procedure are used to transfer comprehensive knowledge to the student model in both its post-training stage and finetuning stage. Extensive experiments on two real-world datasets validate the effectiveness and efficiency of our method.
News recommendation is critical for personalized news access. Most existing news recommendation methods rely on centralized storage of users’ historical news click behavior data, which may lead to privacy concerns and hazards. Federated Learning is a privacy-preserving framework for multiple clients to collaboratively train models without sharing their private data. However, the computation and communication cost of directly learning many existing news recommendation models in a federated way are unacceptable for user clients. In this paper, we propose an efficient federated learning framework for privacy-preserving news recommendation. Instead of training and communicating the whole model, we decompose the news recommendation model into a large news model maintained in the server and a light-weight user model shared on both server and clients, where news representations and user model are communicated between server and clients. More specifically, the clients request the user model and news representations from the server, and send their locally computed gradients to the server for aggregation. The server updates its global user model with the aggregated gradients, and further updates its news model to infer updated news representations. Since the local gradients may contain private information, we propose a secure aggregation method to aggregate gradients in a privacy-preserving way. Experiments on two real-world datasets show that our method can reduce the computation and communication cost on clients while keep promising model performance.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817200.22/warc/CC-MAIN-20240418061950-20240418091950-00423.warc.gz
|
CC-MAIN-2024-18
| 4,763
| 3
|
https://support.onsip.com/hc/en-us/articles/4425676607629-Porting-from-Microsoft-Teams
|
code
|
To successfully port your Microsoft Teams phone number to OnSIP, you will need to manually set up a 10-digit PIN in your Teams Admin Center: https://admin.teams.microsoft.com/
- In the Microsoft Teams portal, navigate to Voice > Phone Numbers
- In the top right hand corner, click Manage porting PIN
- Note: The PIN can NOT include letters or special characters.
After you set a Microsoft Teams PIN, you will need to provide it to the OnSIP porting team to be included in the port request. You can provide us with the PIN when submitting your port request, or when returning your signed LOA (authorization) form.
In place of a traditional bill copy, you can supply OnSIP with screenshots from your Microsoft Teams portal showing the phone numbers to be ported, the name and the address on the account. The number you are porting out can be used as the Microsoft Teams account number and the BTN (Billing telephone number).
Note: We make every attempt to work with all telephone service providers. However, certain local or regional guidelines may preclude your current provider from releasing your number.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473598.4/warc/CC-MAIN-20240221234056-20240222024056-00820.warc.gz
|
CC-MAIN-2024-10
| 1,104
| 7
|
https://github.com/django/django/commit/01dfe35b38b26137165f28b7821c6a6178956bc1
|
code
|
[1.4.X] Fixed #18090 -- Applied filters when running prefetch_related backwards through a one-to-one relation.
Backport of r17888 from trunk. git-svn-id: http://code.djangoproject.com/svn/django/branches/releases/1.4.X@17889 bcc190cf-cafb-0310-a4f2-bffc1f526a37
Showing with 8 additions and 1 deletion.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719784.62/warc/CC-MAIN-20161020183839-00400-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 368
| 5
|
https://axiomatics.com/resources/reference-library/extensible-access-control-markup-language-xacml
|
code
|
eXtensible Access Control Markup Language (XACML)
What is XACML?
The eXtensible Access Control Markup Language (XACML) is a standard developed by leading security experts as part of the Organization for the Advancement of Structured Information Standards (OASIS). It is currently in its third generation.
The eXtensible Access Control Markup Language remains the only standardized way to dynamically enforce authorization by externalizing access controls from applications and databases and using business policies – in what is also referred to as Attribute Based Access Control (ABAC) to govern who can access which data under multiple, fine-grained conditions. At its core, it consists of a standard language, response/request protocol, and reference architecture.
In the XACML 3.0 Oasis Standard, it is stated that; “If implemented throughout an enterprise, a common policy language allows the enterprise to manage the enforcement of all the elements of its security policy in all the components of its information systems. Managing security policy may include some or all of the following steps: writing, reviewing, testing, approving, issuing, combining, analyzing, modifying, withdrawing, retrieving, and enforcing policy.”
The advantages of using XACML
Using XACML offers many advantages to enterprises and large organizations that require a standardized way to securely share assets, while meeting and proving compliance.
Centrally managed system
With one central repository for all XACML policies, XACML standardizes authorization to deliver unrivaled control of assets across the enterprise at every point of access, whether it’s via an API, microservices, app, portal, webservice or database.
Avoid vendor lock-in
Using a standards-based language as opposed to a proprietary system enables more flexibility among developers and avoids vendor lock-in.
Security you can trust
The XACML policy standard has been developed collaboratively and implemented by leading IT security experts at some of the world’s leading companies. It meets the highest security standards.
Simplified policy creation
To simplify working with XACML, requests and responses can also be expressed in JSON. The lightweight data-interchange format is easy for humans to read and write and easy for machines to parse and generate.
The XACML architecture
The XACML architecture is made up of five key software modules that work in unison to enforce standardized run-time authorization at any and every access request point.
Policy Administration Point (PAP)
The Policy Administration Point is the point of policy authorship. Once a user has written or edited/updated a policy in plain language, the PAP automatically converts it to machine-readable, standards-based XACML code for administration and enforcement by the system.
Policy Information Point (PIP)
The Policy Information Point is a powerful system that calls out to the different attribute directories and third-party services at run-time so that the Policy Decision Point can establish whether the request meets a policy's specifications. These attribute values describe the subject, resource, environment, and so on.
Policy Retrieval Point (PRP)
The Policy Retrieval Point is the storage point of the XACML access authorization policies. This is most commonly a filesystem or database.
Policy Decision Point (PDP)
The Policy Decision Point evaluates the request, based on what’s written in a policy, and makes a decision – typically Permit or Deny access. The XACML PDP then informs the PEP of the decision.
Policy Enforcement Point (PEP)
The Policy Enforcement Point, both receives the access request and enforces the decision of permit or deny from the XACML PDP in run-time.
The XACML authorization flow
- A user makes an access request which is intercepted by the Policy Enforcement Point (PEP) and converted into XACML.
- The Policy Decision Point (PDP) queries the Policy Information Point (PIP) and the Policy Retrieval Point (PRP) to decipher whether or not the attribute values and policies are aligned.
- The Policy Decision Point (PDP) then takes a decision to permit or deny access and sends the response to the Policy Enforcement Point (PEP).
- The Policy Enforcement Point (PEP) enforces the decision.
XACML policy language structure and syntax
The XACML policy language is made up of a number of key elements that enable fine-grained authorization to be implemented across different deployment models, i.e., cloud, on-premises, and hosted environments. Read more about XACML Policy Language Structure and Syntax.
A rule is the basic component of a policy. As such, it delivers the desired effect of the policy – permit or deny. A rule can contain a target, a condition, advice, or a set of obligations.
A policy consists of one rule or a set of rules, a rule-combining algorithm, and optional obligations and advice. The policy is the foundation from which the XACML PDP can perform.
A policy set is a group of policies, which can be located in various locations. Policy sets include policies, a policy-combining algorithm, and optional obligations and advice.
A target enables the XACML PDP to verify which policy or rules apply to a certain request. Target statements define the relevant attributes for the rule, policy, or policy set.
Conditions are part of a rule and compare attribute values to evaluate whether an attribute is “True”, “False”, or “Indeterminate”. For example, a condition can check whether a subject’s username matches a resource’s owner attribute.
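XACML’s three-valued condition evaluation can be sketched in Python (a hypothetical stand-in for a real XACML condition element; the `username`/`owner` attribute names follow the owner-check example in the text):

```python
# Sketch of XACML's three-valued condition evaluation: a condition yields
# "True", "False", or "Indeterminate" when a referenced attribute is missing.

def owner_condition(attributes):
    """Hypothetical condition: does the subject's username match the
    resource's owner attribute?"""
    username = attributes.get("username")
    owner = attributes.get("owner")
    if username is None or owner is None:
        return "Indeterminate"  # missing attribute: the PDP cannot decide
    return "True" if username == owner else "False"

print(owner_condition({"username": "alice", "owner": "alice"}))  # True
print(owner_condition({"username": "bob", "owner": "alice"}))    # False
print(owner_condition({"username": "bob"}))                      # Indeterminate
```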
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817106.73/warc/CC-MAIN-20240416191221-20240416221221-00224.warc.gz
|
CC-MAIN-2024-18
| 5,595
| 39
|
https://ranger.uta.edu/~alex/courses/3318/
|
code
|
Section 001 Lectures Tue,Thu 9:30am-10:50am, online
Section 900 Lectures Tue,Thu 9:30am-10:50am, online
Section 002 Lectures Tue,Thu 11am-12:20pm, online
Instructor: Alexandra Stefan
You can see any TA for help, not only the one assigned to your section.
Mon, Wed 2:30-3pm; Tue, Thu 12:30-1:30pm
or by appointment. Online, using Teams chat.
Textbook: Introduction to Algorithms, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, 3rd edition (CLRS) - NOT required. All the material covered in homework and exams/quizzes will be provided in slides and discussed in lectures. Other reference: Algorithms in C, Parts 1-5, by Robert Sedgewick, 3rd edition, 2001, Addison-Wesley. ISBN-10: 0201756080, ISBN-13: 978-0201756081. NOTE: this book is usually sold as two volumes, one for parts 1-4 and one for part 5. Most of the class topics are covered in part 1. This is a good textbook with interesting code (available online as well) and algorithms - also NOT required.
This course teaches students how to design, choose, and evaluate appropriate algorithms when designing and implementing software. Students will learn a broad set of algorithms covering different problems, including sorting, search, spanning trees, and network flow. Students will also learn about basic data structures, such as linked lists, stacks, and queues. The course will also teach students basic methods for analyzing algorithmic properties such as time and space complexity.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00592.warc.gz
|
CC-MAIN-2021-49
| 1,470
| 9
|
https://gitlab.com/postmarketOS/pmaports/-/issues/1115
|
code
|
samsung-i9300: Regarding alleged hardware failure with mainline
This isn't so much an issue as a report and followup on !2229 (merged) with the goal of figuring out how to best handle this situation. It is a long post, but I feel I should add as much context as possible to help figuring out what the best approach is here.
Background: Some time ago I tried out postmarketOS with the mainline kernel on my i9300. After playing some Animatch and chatting with Telegram Desktop the phone started behaving strangely, with the screen flickering and turning off whenever I opened a new app. I turned the phone off in fear that something would happen to it, and following that it would just shut down when I booted into Phosh, and it wouldn't turn on unless I plugged in a power cable, and even then I just got stuck in charging-sdl. I decided to shrug it off as the phone dying from being old.
Today I ended up chatting with Nergzd723 about this and he mentioned he had experienced similar things when using the mainline kernel, so we concluded it could be the mainline kernel causing hardware failure. However, while his phone could boot fine but had a seemingly permanently wrecked panel, mine just wouldn't boot unless I plugged in a charger.
What I didn't think of was testing a different battery.
Following Nergzd723 saying that what I experienced sounded like a dead battery I dug up the original battery and used that instead of the aftermarket one I used in the phone, and lo and behold it started up and ran Glacier as fine as Glacier ever runs.
In other words, it seems my phone had no permanent damage and the panel issues I experienced were temporary, and the rest was caused by a bad battery. However, it is nevertheless strange that this happens just when I install pmOS on it. Previously, it had been used on a daily basis by a relative of mine to make and receive calls, and it worked just fine. It was when I put pmOS on it and started using it that the battery seemingly decided to die. There is also the matter of Nergzd723's panel/screen, which still seems dead from what I understand.
Where I want to get with this long post is: Can we trust the mainline kernel on this device? While my device seems fine now (I didn't use it for very long though), it is a strange coincidence that the battery (and panel) would start acting up just when I started using pmOS on the device. And, again, Nergzd723's panel seems permanently failed. Could it be that the mainline kernel does something wrong in regards to how it manages power on this device, or were we just unlucky? Could it be that the mainline kernel is unable to charge the device, like samsung-skomer, and thus my issues wouldn't go away even if I let it charge (with mainline) for a while? Thoughts?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100781.60/warc/CC-MAIN-20231209004202-20231209034202-00172.warc.gz
|
CC-MAIN-2023-50
| 2,777
| 8
|
https://www.cs.cmu.edu/~ModProb/KWIC.html
|
code
|
From Parnas [Parnas72] we have a concise definition of the Keyword in Context problem:
The KWIC index system accepts an ordered set of lines, each line is an ordered set of words, and each word is an ordered set of characters. Any line may be "circularly shifted" by repeatedly removing the first word and appending it at the end of the line. The KWIC index system outputs a listing of all circular shifts of all lines in alphabetical order.
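Parnas’s definition translates almost directly into code. A minimal sketch in Python (the input/output conventions — one string per line, case-insensitive ordering — are assumptions):

```python
def kwic(lines):
    """All circular shifts of all lines, listed in alphabetical order."""
    shifts = []
    for line in lines:
        words = line.split()
        for i in range(len(words)):
            # A circular shift repeatedly moves the first word to the end.
            shifts.append(" ".join(words[i:] + words[:i]))
    return sorted(shifts, key=str.lower)  # case-insensitive ordering (assumed)

for shift in kwic(["The KWIC index", "is simple"]):
    print(shift)
```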
Updated Halloween '95.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00368.warc.gz
|
CC-MAIN-2021-43
| 489
| 4
|
http://loiter.co/i/just-another-day-at-the-waterpark/
|
code
|
just another day at the waterpark
*deep breath* D'AWWWWWWWWWWWWW
Smart honey badger finds ways to escape every enclosure made for him
An art critic with a nose for detail.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813622.87/warc/CC-MAIN-20180221123439-20180221143439-00207.warc.gz
|
CC-MAIN-2018-09
| 199
| 5
|
https://moz.com/community/q/how-do-you-find-the-total-search-volume-for-an-industry
|
code
|
Currently my company is working on trying to find the total search volume (read: search potential) for our industry, but aren't sure how best to go about it. Obviously GWT data and Keyword Planner data came to mind, but those are not all encompassing (at least we don't think they are) -- GWT only has data for terms you rank for and the Keyword Planner only gives you volume if you already know the queries. Is there some quick and easy way to go about finding this that we haven't thought of?
One thing to note is that our business is nationwide, meaning that all our terms will have a geo-identifier associated with them for each location, i.e. [city] + search term -- this just makes things even more complicated. Any advice on how to approach this would be much appreciated!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517559.41/warc/CC-MAIN-20210119011203-20210119041203-00686.warc.gz
|
CC-MAIN-2021-04
| 903
| 4
|
https://outfittube.com/fashionable-curve-clothes-trendy-outfit-ideas-bella-alexandra/
|
code
|
THANKS FOR WATCHING! Please make sure to subscribe for more ideas.
🤗HIT THE NOTIFICATION BELL TO KNOW WHEN WE POST🛎
#THICK #THICKNESS #MODELS #PLUSSIZE
OUR GOAL IS NOT TO PROMOTE NUDITY OR SEXUAL CONTENT.
Thanks for watching! And don't forget to subscribe for daily uploads!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991258.68/warc/CC-MAIN-20210517150020-20210517180020-00355.warc.gz
|
CC-MAIN-2021-21
| 286
| 5
|
http://vermelho-infame.tumblr.com/
|
code
|
My wife and I just had our African wedding celebration with her side of the family. It was off the charts.
Seeed // Ding
omggg going through my DA storage was so painful i must resist the urge to redraw all the old things. OH ADAM, I MISS DRAWING YOU.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920694.0/warc/CC-MAIN-20140909054023-00426-ip-10-180-136-8.ec2.internal.warc.gz
|
CC-MAIN-2014-35
| 251
| 3
|
https://community.glideapps.com/t/multiple-pins-in-map-component/24096
|
code
|
I’m sure this may have been asked before, but is there a way that I can have a map component show multiple pins? Could this be done with an array column? I’m currently developing an app for a local heritage trail, so I’d like to show a map and an inline list below it with stops/locations. Thanks in advance.
That’s how the map layout works. Just like any of the other list layouts. As long as the content for each stop is contained in separated rows, then you can display them as multiple pins on a map.
In your case, if you already have an inline list of stops, you can simply duplicate that list and change it to the map style layout.
Great thanks Jeff. That was obvious but I haven’t been gliding in a while! Much appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817073.16/warc/CC-MAIN-20240416062523-20240416092523-00819.warc.gz
|
CC-MAIN-2024-18
| 738
| 4
|
https://www.cambridgeclarion.org/privacy.html
|
code
|
By Stephen Hewitt | Published 28 January 2021 | Last updated 4 March 2021
Everything required to display any page from www.cambridgeclarion.org in a browser comes only from the website itself. There are no cross-site requests in any page and no data-sharing arrangements with third parties. In particular the site does not use Google analytics or any other third-party analytics.
This means all images, CSS files, iframes and fonts can come only from cambridgeclarion.org itself.
This site is not commercial, does not attempt to make money and does not carry any kind of advertisement, and does not use “pixels”, “web beacons” or anything else to track its visitors off-site.
From time to time it attempts to count the number of visitors to each page, or to selected pages, to assess interest in different topics and success in providing information. For the purposes of distinguishing human visitors from robots and crawlers (which are the majority of traffic), the site may log the declared user agent string (UA), referrer, and other http headers from the browser and the time of each page access, without logging the IP address. Note that the web hosting provider may independently log IP addresses as described below in Server logs.
“The cookie you noticed on the site named server_id is used by our load balancing server which is used to improve load times and provide redundancy in times of network/server congestion.”
In 2021, the cookie typically contains the following - shown here in an http header:
Set-Cookie: SERVERID=vhost7-1_www; path=/
As is apparent, it does not seem to contain enough information to identify an individual user.
There are comment forms on some of the pages on this website through which visitors can submit a comment. This comment system does not use third-party providers or share data with third-parties. It was custom-made for Clarion and runs entirely self-contained on the web server that hosts the site.
Clarion's comment system does not request an email address or require any personal information when you submit a comment. The submission form has a field for a name, which will be published, but you are free to use a pseudonym or write “anonymous”. Everything you write in the comment form will be published on the comment web page and will not be stored anywhere else.
The contents of comment forms which are submitted but not published are not stored. Reasons for non-publication might be rejection because a submitter fails the challenge question or never replies to the challenge. During the challenge someone's comment will be echoed back out to them in the form, but it is not stored on the server until it is published.
The comment submission system may log the user agent string (UA), time and referrer (which ought to be a page on Clarion) and the outcome but not the IP address for each attempt to submit a comment. This is for the purpose of assessing interest, monitoring attempts at abuse by robots and checking that the comment software is working.
Gremple, the German verb conjugator
Gremple currently logs every verb (or supposed verb) that is looked up, along with the user agent string (UA), date (but not time of day), referrer and the result. This is for the purpose of monitoring for attempted abuse by robots, counting how much it is used and improving the software, including detecting missing verbs.
In January 2021 the website has currently been hosted for many years by UK-based hosting provider Iomart/Easyspace on a shared virtual server. I do not have access to the server logs but in response to a query about server logs in 2018, Iomart stated the following by email:
“The log files are held for at least 12 months and log information such as time, date, domain, IP, web browser, these are all stored in our GDPR compliant and secure UK data centres.”
Changes to this policy
The policies and details on this page may be changed in the future. If there is a change to the page then the fact of a change will be noted on the home page under What's new: a log of changes to this website.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00749.warc.gz
|
CC-MAIN-2023-50
| 4,077
| 19
|
https://forum.inductiveautomation.com/t/problem-reading-float-value-from-modbus-tcp/16621
|
code
|
I’ve a Modbus TCP slave which communicates with Ignition SCADA. I am trying to read float values from holding register 10102. The value is read correctly on the gateway tags. I would like to read the float value in my designer project. So, I created a OPC tag with data type float and mapped it to HR10102. But I am receiving some integer value instead of float. Could you please let me know what could be the problem?
Thank you very much in advance,
Step 1. Read the manual. https://docs.inductiveautomation.com:8443/display/DOC79/Modbus+Addressing
You need to have your address as HRF10102, and set the data type to float.
Thank you!. That solved the issue
I just ran across this thread. I am having the exact same problem as shwethag, except I have used HRF and am still getting a value of zero in Ignition Designer. My exact address is 428893, and so I am using HRF28893. Any help would be appreciated.
Usually the problem is one of:
- your address is wrong, often off by one
- you need to enable the "Swap Words" setting
- the value is actually 0
If you can use some 3rd party Modbus software like modscan or modpoll or whatever to see the "right" value then we can see which of those problems it is.
Thank you Kevin--I discovered it was #2. as you suggested! Works fine now.
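For anyone hitting the same problem: the "Swap Words" issue boils down to the order of the two 16-bit holding registers that carry a 32-bit IEEE-754 float. A minimal decoding sketch in Python (register values are illustrative, not taken from the devices in this thread):

```python
import struct

def registers_to_float(reg_hi, reg_lo, swap_words=False):
    """Combine two 16-bit holding registers into a 32-bit IEEE-754 float.

    With swap_words=True the device transmits the low-order word first,
    which is the situation the driver's "Swap Words" setting compensates for.
    """
    if swap_words:
        reg_hi, reg_lo = reg_lo, reg_hi
    raw = struct.pack(">HH", reg_hi, reg_lo)  # big-endian word pair
    return struct.unpack(">f", raw)[0]

# 1.0 encodes as 0x3F800000, i.e. the register pair (0x3F80, 0x0000).
print(registers_to_float(0x3F80, 0x0000))                   # 1.0
print(registers_to_float(0x0000, 0x3F80, swap_words=True))  # 1.0
```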
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00734.warc.gz
|
CC-MAIN-2023-06
| 1,282
| 12
|
http://www.dzone.com/links/jboss_eds_platform_are_you_trying_to_connect_to_s.html
|
code
|
This is the process I use to create a new Ruby on Rails project. I usually use the latest... more »
How to : Running Docker on Ubuntu for Power Servers.
I trust anyone reading this post title most probably is expecting to see something like UML... more »
I'm using google_aouth2 and facebook with devise. Google aouth doesn't seem to play well with... more »
I want to share some insights from our recent project at techdev, trackr. trackr is a web... more »
In this post we have shared 17+ Best Free Bootstrap Admin Templates. You can download these... more »
Discover best practices and the most useful tools for building the ideal integration architecture.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379916.51/warc/CC-MAIN-20141119123259-00087-ip-10-235-23-156.ec2.internal.warc.gz
|
CC-MAIN-2014-49
| 664
| 7
|
https://thedesignwall.org/bitlocker-windows-security-microsoft-docs/
|
code
|
BitLocker – Windows security | Microsoft Docs
You can use this tool to help recover data that is stored on a drive that has been encrypted by using BitLocker.
Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support. This topic provides a high-level overview of BitLocker, including a list of system requirements, practical applications, and deprecated features. BitLocker Drive Encryption is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers.
The TPM is a hardware component installed in many newer computers by the computer manufacturers. It works with BitLocker to help protect user data and to ensure that a computer has not been tampered with while the system was offline. On computers that do not have a TPM version 1.2 or later, you can still use BitLocker to encrypt the operating system drive. However, this implementation will require the user to insert a USB startup key to start the computer or resume from hibernation.
Starting with Windows 8, you can use an operating system volume password to protect the operating system volume on a computer without TPM. Both options do not provide the pre-startup system integrity verification offered by BitLocker with a TPM. In addition to the TPM, BitLocker offers the option to lock the normal startup process until the user supplies a personal identification number (PIN) or inserts a removable device, such as a USB flash drive, that contains a startup key.
These additional security measures provide multifactor authentication and assurance that the computer will not start or resume from hibernation until the correct PIN or startup key is presented. Data on a lost or stolen computer is vulnerable to unauthorized access, either by running a software-attack tool against it or by transferring the computer’s hard disk to a different computer. BitLocker helps mitigate unauthorized data access by enhancing file and system protections.
BitLocker also helps render data inaccessible when BitLocker-protected computers are decommissioned or recycled. BitLocker Recovery Password Viewer.
You can use this tool to help recover data that is stored on a drive that has been encrypted by using BitLocker. By using this tool, you can examine a computer object’s Properties dialog box to view the corresponding BitLocker recovery passwords. Additionally, you can right-click a domain container and then search for a BitLocker recovery password across all the domains in the Active Directory forest. To view the recovery passwords, you must be a domain administrator, or you must have been delegated permissions by a domain administrator.
BitLocker Drive Encryption Tools. Both manage-bde and the BitLocker cmdlets can be used to perform any task that can be accomplished through the BitLocker control panel, and they are appropriate to use for automated deployments and other scripting scenarios. Repair-bde is provided for disaster recovery scenarios in which a BitLocker-protected drive cannot be unlocked normally or by using the recovery console.
TPM 2.0 is not supported in Legacy and CSM modes of the BIOS; devices with TPM 2.0 must have their BIOS mode configured as native UEFI only. For added security, enable the Secure Boot feature. A partition subject to encryption cannot be marked as the active partition; this applies to the operating system, fixed data, and removable data drives. When installed on a new computer, Windows will automatically create the partitions that are required for BitLocker. When installing the BitLocker optional component on a server, you will also need to install the Enhanced Storage feature, which is used to support hardware encrypted drives.
This topic for the IT professional provides an overview of the ways that BitLocker Device Encryption can help protect data on devices running Windows.
BitLocker frequently asked questions (FAQ). This topic for the IT professional answers frequently asked questions concerning the requirements to use, upgrade, deploy and administer, and key management policies for BitLocker.
Prepare your organization for BitLocker: Planning and policies. BitLocker basic deployment. This topic for the IT professional explains how BitLocker features can be used to protect your data through drive encryption. BitLocker: How to deploy on Windows Server. BitLocker: How to enable Network Unlock. BitLocker Group Policy settings. This topic for IT professionals describes the function, location, and effect of each Group Policy setting that is used to manage BitLocker.
BCD settings and BitLocker. BitLocker Recovery Guide. Protect BitLocker from pre-boot attacks. This detailed guide will help you understand the circumstances under which the use of pre-boot authentication is recommended for devices running Windows 11, Windows 10, Windows 8.
This guide describes the resources that can help you troubleshoot BitLocker issues, and provides solutions for several common BitLocker issues. Protecting cluster shared volumes and storage area networks with BitLocker.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00042.warc.gz
|
CC-MAIN-2022-49
| 5,713
| 18
|
http://fonsg3.hum.uva.nl/praat/manual/Configuration__To_Configuration__varimax____.html
|
code
|
A command that rotates the selected Configuration object to a new Configuration object whose coordinates have maximum squared variance.
The iteration process stops when either the maximum number of iterations is reached or the tolerance criterion is met, whichever comes first.
The Varimax rotation procedure was first proposed by Kaiser (1958). Given a numberOfPoints × numberOfDimensions configuration A, the procedure tries to find an orthonormal rotation matrix T such that the sum of variances of the columns of B*B is a maximum, where B = AT and * is the element-wise (Hadamard) product of matrices. A direct solution for the optimal T is not available, except for the case when numberOfDimensions equals two. Kaiser suggested an iterative algorithm based on planar rotations, i.e., alternate rotations of all pairs of columns of A.
However, this procedure is not without problems: the varimax function may have stationary points that are not even local maxima. We have incorporated an algorithm of Ten Berge (1995) that prevents this unpleasant situation from happening.
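For illustration, here is a compact varimax sketch in Python/NumPy. It uses a standard SVD-based update rather than Kaiser’s pairwise planar rotations or Ten Berge’s refinement, and the iteration cap and tolerance are illustrative; the stopping rule mirrors the max-iterations/tolerance criterion described above:

```python
import numpy as np

def varimax(A, max_iter=100, tol=1e-8):
    """Rotate configuration A so its squared coordinates have maximum variance.

    Returns the rotated configuration B = A @ T and the orthonormal matrix T.
    """
    n, k = A.shape
    T = np.eye(k)
    objective_old = 0.0
    for _ in range(max_iter):
        B = A @ T
        # Gradient of the varimax criterion, projected back onto the
        # orthonormal matrices via an SVD.
        G = A.T @ (B**3 - B * (B**2).sum(axis=0) / n)
        U, S, Vt = np.linalg.svd(G)
        T = U @ Vt
        objective = S.sum()
        if objective - objective_old < tol:  # tolerance met (or max_iter reached)
            break
        objective_old = objective
    return A @ T, T

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
B, T = varimax(A)
print(np.allclose(T.T @ T, np.eye(3)))  # T stays orthonormal: True
```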
© djmw, April 7, 2004
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512121.15/warc/CC-MAIN-20171211033436-20171211053436-00005.warc.gz
|
CC-MAIN-2017-51
| 1,103
| 5
|
https://www.crvownersclub.com/threads/1999-crv-blower-power-question.200668/
|
code
|
The blue wire with the black stripe, which should have power, doesn't. Where does it get its power from? I checked all related fuses under the dash and in the hood but all looked fine. Really confused??? I know we have a bad blower resistor too, but without power first.... Any help would be amazing!!
Thank you in advance!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00712.warc.gz
|
CC-MAIN-2023-06
| 314
| 2
|
https://aitopics.org/search?filters=taxnodes%3ATechnology%7CInformation+Technology%7CData+Science%40%40taxnodes%3ATechnology%7CInformation+Technology%7CArtificial+Intelligence%40%40modified%3A%5B1000-01-01T00%3A00%3A01.000Z+TO+NOW%5D%40%40modified%3A%5B*+TO+NOW%5D
|
code
|
Apache Spark is the de-facto standard for large scale data processing. This is the first course of a series of courses towards the IBM Advanced Data Science Specialization. We strongly believe that it is crucial for success to start learning a scalable data science platform, since memory and CPU constraints are the most limiting factors when it comes to building advanced machine learning models. In this course we teach you the fundamentals of Apache Spark using python and pyspark. We'll introduce Apache Spark in the first two weeks and learn how to apply it to compute basic exploratory and data pre-processing tasks in the last two weeks.
Climate change is here, and it's set to get much worse, experts say – and as a result, many industries have pledged to reduce their carbon footprints in the coming decades. Now, the recent jump in energy prices, due mainly to the war in Ukraine, also emphasizes the need for the development of cheap, renewable forms of energy from freely available sources, like the sun and wind – as opposed to reliance on fossil fuels controlled by nation-states. But going green is easier for some industries than for others – and one area where it is likely to be a significant challenge is in data centers, which require huge amounts of electricity to cool, in some cases, the millions of computers deployed. Growing consumer demand to reduce carbon output, along with rules that regulators are likely to impose in the near future, requires companies that run data centers to take immediate steps to go green. And artificial intelligence, machine learning, neural networks, and other related technologies can help enterprises of all kinds achieve that goal, without having to spend huge sums to accomplish it.
Since 2002, Quantium have combined the best of human and artificial intelligence to power possibilities for individuals, organisations and society. Whether it be building forecasting engines that are driving down food wastage or creating mapping tools to support targeted measures in combatting human trafficking, Quantium believes in better goods, services, experiences, and championing the benefits of data for a brighter future. Q-Telco is the new joint venture between Quantium and Telstra to unlock the full potential of data and AI for Telstra and its customers. We'll do this by combining our market leading data science and AI capabilities with Telstra's customer, product and network data assets. This new partnership will not only provide personalised and data-enabled products and offers for Telstra's customers, but it will also embed proactive and predictive AI and machine learning across Telstra's core business.
Incorporating ethics and legal compliance into data-driven algorithmic systems has been attracting significant attention from the computing research community, most notably under the umbrella of fair [8] and interpretable [16] machine learning. While important, much of this work has been limited in scope to the "last mile" of data analysis and has disregarded both the system's design, development, and use life cycle (What are we automating and why? Is the system working as intended? Are there any unforeseen consequences post-deployment?) and the data life cycle (Where did the data come from? How long is it valid and appropriate?). In this article, we argue two points. First, the decisions we make during data collection and preparation profoundly impact the robustness, fairness, and interpretability of the systems we build. Second, our responsibility for the operation of these systems does not stop when they are deployed. To make our discussion concrete, consider the use of predictive analytics in hiring. Automated hiring systems are seeing ever broader use and are as varied as the hiring practices themselves, ranging from resume screeners that claim to identify promising applicants, to video and voice analysis tools that facilitate the interview process, and game-based assessments that promise to surface personality traits indicative of future success. Bogen and Rieke [5] describe the hiring process from the employer's point of view as a series of decisions that forms a funnel, with stages corresponding to sourcing, screening, interviewing, and selection. The hiring funnel is an example of an automated decision system – a data-driven, algorithm-assisted process that culminates in job offers to some candidates and rejections to others. The popularity of automated hiring systems is due in no small part to our collective quest for efficiency.
Today, we'll look at something very big that you might have never seen, or rarely seen, on the web. We have researched for more than 35 days to find all the cheat sheets on machine learning, deep learning, data mining, neural networks, big data, artificial intelligence, python, tensorflow, scikit-learn, etc. from all over the web. To make it easy for all learners, we have zipped over 100 machine learning, data science, and artificial intelligence cheat sheets into one article. You can also download the pdf version of these cheat sheets (links are already provided below every image). Note: The list is long.
Technology is not showing signs of slowing down any time soon. As we move into cloud computing, big data, natural language processing and artificial intelligence, the employment sector is gearing up for a big boost in the number of opportunities. Organisations such as Google, Microsoft, Facebook and Apple are aggressively hiring people with expertise in these domains, which makes them highly lucrative. Artificial intelligence is particularly on the cusp of a breakthrough. Technologies such as machine learning, neural networks, genetic algorithms and deep learning are receiving a lot of spotlight.
Projects have always been thought of as measurable improvements resulting from a result produced, which serve as the icing on the cake for achieving personal or corporate goals. Talking about individual projects, have you found it challenging to learn at home? Many of us are in the same boat -- there are far too many things to handle during these trying times, and learning has taken a back seat, contrary to our expectations. So, what are our options for getting back on track? How can we apply what we have learned about data science in the real world? Picking an open-source data science project and sticking with it is extremely beneficial.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00449.warc.gz
|
CC-MAIN-2022-21
| 6,429
| 7
|
https://stagsoftware.com/s7/dangerous-decisions-assumption-traps/
|
code
|
As good engineers, we use metrics to make decisions on quality and testing. My view is that measurements carry inherent assumptions that we have to be cognizant of; otherwise our decisions can be dangerous. For example, we use defect arrival rate to make decisions on quality, with the inherent assumption that the test cases are relevant and complete. We use the coverage metric to judge the adequacy of test cases, with the assumption that all system behaviors have indeed been coded.
I feel that measurements like test case immunity, test case growth, and quality growth could be interesting indicators. They ensure that we stay focused on the goal of effective testing. These thoughts are outlined in this short presentation.
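To see how mechanical these metrics are, and how silent their assumptions, consider the defect arrival rate mentioned above: it is just a count per period, and nothing in the computation checks that the test cases behind it are relevant or complete. A minimal illustrative sketch:

```python
from collections import Counter
from datetime import date

def defect_arrival_rate(defect_dates):
    """Defects logged per ISO week. The number itself says nothing about
    whether the test cases that found the defects are relevant or complete."""
    weeks = Counter(d.isocalendar()[:2] for d in defect_dates)
    return dict(weeks)

rates = defect_arrival_rate([date(2020, 9, 14), date(2020, 9, 16), date(2020, 9, 21)])
print(rates)  # {(2020, 38): 2, (2020, 39): 1}
```

The function names and the per-week bucketing are assumptions made for illustration; real teams bucket by build, sprint, or release.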
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400190270.10/warc/CC-MAIN-20200919044311-20200919074311-00517.warc.gz
|
CC-MAIN-2020-40
| 744
| 2
|
https://www.hackingloops.com/hacking-class-9-ip-spoofing-and-its-use/
|
code
|
As we have covered almost all the topics of scanning, this is the last topic that comes under scanning.
First of all ..
WHAT IS IP SPOOFING??
IP spoofing is basically disguising your IP address so that it appears to be something else to the victim, i.e., it is a forged (virtual) IP address, not an encrypted one.
~ IP Spoofing is when an attacker changes his IP address so that he appears to be someone else.
~ When the victim replies back to the address, it goes back to the spoofed address and not to the attacker’s real address.
~ You will not be able to complete the three-way handshake and open a successful TCP connection by spoofing an IP address.
You will understand it better with the snapshot below.
HOW TO DETECT IP SPOOFING?
When an attacker is spoofing packets, he is usually at a different location than the address being spoofed.
The attacker's TTL (Time To Live, the hop-limit field in the IP header) will usually differ from the spoofed address's real TTL. If you compare the TTL of the received packets with the TTL obtained by probing the spoofed address directly, you will see that they don't match.
Such attacks are blocked in recent versions of Windows, i.e., after SP3; the built-in firewall will itself block most spoofing attacks.
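The TTL-mismatch detection idea can be sketched offline, without any packet capture. The helper below is purely illustrative: it assumes the common initial TTL values 64, 128, and 255, infers the hop distance implied by an observed TTL, and flags a mismatch against a TTL measured by probing the claimed source directly. The function names and the tolerance value are assumptions for the sketch.

```python
# Illustrative sketch of the TTL-mismatch heuristic for spotting spoofed
# packets. Assumes the usual initial TTL values used by common OSes.
COMMON_INITIAL_TTLS = (64, 128, 255)

def hop_distance(observed_ttl):
    """Infer hops travelled from an observed TTL, assuming the sender
    started from the nearest common initial TTL at or above it."""
    for initial in COMMON_INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial - observed_ttl
    return None  # TTL larger than any common initial value

def looks_spoofed(packet_ttl, probe_ttl, tolerance=2):
    """Compare the hop distance implied by a suspect packet with the hop
    distance measured by probing the claimed source address directly."""
    d1, d2 = hop_distance(packet_ttl), hop_distance(probe_ttl)
    if d1 is None or d2 is None:
        return True
    return abs(d1 - d2) > tolerance

# A packet claiming 5 hops while the real host is 20 hops away is suspect.
print(looks_spoofed(59, 44))  # True  (5 hops vs 20 hops)
print(looks_spoofed(59, 58))  # False (5 hops vs 6 hops)
```

In practice the probe TTL would come from pinging or tracerouting the claimed source; NAT and asymmetric routing can cause false positives, which is why a tolerance is used.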
This is all about IP spoofing and the scanning part.
The next two parts in the upcoming class:
1. How to protect yourself from scanning.
2. How to hack websites using the things we have studied until now. A little SQL injection tutorial is also required for that; we will try to cover it as quickly as possible.
If you have any doubts about IP spoofing, feel free to ask.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511023.76/warc/CC-MAIN-20231002232712-20231003022712-00353.warc.gz
|
CC-MAIN-2023-40
| 1,511
| 17
|
https://issues.openmrs.org/browse/FORM-108
|
code
|
We are testing OpenMRS 1.7. After rebuilding all the XSNs, the schema builder does not seem to work properly: the order of the answers for certain questions is different on the form, so schema validation errors are displayed when trying to open the InfoPath forms under the Form Entry tab.
Jeremy has revised the code to call .getSortedAnswers() instead of .getAnswers(), and the forms now open with no error.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00444.warc.gz
|
CC-MAIN-2021-17
| 409
| 2
|
https://www.kde.org/announcements/kde-frameworks-5.2.0.php
|
code
|
Release of KDE Frameworks 5.2.0
September 12, 2014. KDE today announces the release of KDE Frameworks 5.2.0.
KDE Frameworks are 60 addon libraries to Qt which provide a wide variety of commonly needed functionality in mature, peer reviewed and well tested libraries with friendly licensing terms. For an introduction see the Frameworks 5.0 release announcement.
New in this Version
- reimplementation of the file item plugin for linking files to activities
- fix handling of uncompressed files
- fix missing default shortcuts for standard actions, leading to many runtime warnings
- better support for QGroupBox in KConfigDialogManager
- Mark KAboutData::setProgramIconName() as deprecated, it did not do anything. Use QApplication::setWindowIcon(QIcon::fromTheme("...")) instead.
- new classes Kdelibs4ConfigMigrator and KPluginMetaData
- added org.kde.kio component.
- disable the DDS and JPEG-2000 plugins when Qt version is 5.3 or later
- now follows the mime-apps spec, for better interoperability with gio when it comes to the user's preferred and default apps.
- new classes EmptyTrashJob and RestoreJob.
- new functions isClipboardDataCut and setClipboardDataCut.
- installing "stuff" works again (porting bug)
- new class KColumnResizer (makes it easy to vertically align widgets across groups)
- New method KWindowSystem::setOnActivities
- KActionCollection::setDefaultShortcuts now makes the shortcut active too, to simplify application code.
- The maximum worker count will now decrease if a lower value is set after workers have been created. Previously, workers would remain active once they have been created.
- Examples from the previous ThreadWeaverDemos Github repository are being merged into the KF5 ThreadWeaver repo.
- The maximum worker count can now be set to zero (the previous minimum was 1). Doing so will effectively halt processing in the queue.
- Documentation of various aspects of ThreadWeaver use is becoming part of the KDE Frameworks Cookbook. Parts of it is located in the examples/ directory.
- Support for relative libexec dir.
- the file dialog now remembers its size correctly, and works better with remote URLs.
On Linux, using packages for your favorite distribution is the recommended way to get access to KDE Frameworks. Alternatively, you can build Frameworks from source using kdesrc-build.
Frameworks 5.2.0 requires Qt 5.2. It is part of a series of planned monthly releases making improvements available to developers in a quick and predictable manner.
Those interested in following and contributing to the development of Frameworks can check out the git repositories, follow the discussions on the KDE Frameworks Development mailing list and contribute patches through review board. Policies and the current state of the project and plans are available at the Frameworks wiki. Real-time discussions take place on the #kde-devel IRC channel on freenode.net.
You can discuss and share ideas on this release in the comments section of the dot article.
KDE is a Free Software community that exists and grows only because of the help of many volunteers that donate their time and effort. KDE is always looking for new volunteers and contributions, whether it is help with coding, bug fixing or reporting, writing documentation, translations, promotion, money, etc. All contributions are gratefully appreciated and eagerly accepted. Please read through the Donations page for further information or become a KDE e.V. supporting member through our new Join the Game initiative.
KDE is an international technology team that creates free and open source software for desktop and portable computing. Among KDE's products are a modern desktop system for Linux and UNIX platforms, comprehensive office productivity and groupware suites and hundreds of software titles in many categories including Internet and web applications, multimedia, entertainment, educational, graphics and software development. KDE software is translated into more than 60 languages and is built with ease of use and modern accessibility principles in mind. KDE's full-featured applications run natively on Linux, BSD, Solaris, Windows and Mac OS X.
Trademark Notices. KDE® and the K Desktop Environment® logo are registered trademarks of KDE e.V. Linux is a registered trademark of Linus Torvalds. UNIX is a registered trademark of The Open Group in the United States and other countries. All other trademarks and copyrights referred to in this announcement are the property of their respective owners.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00623-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 4,779
| 46
|
https://www.ordinatechnic.com/web-cloud-server/1/Software%20Development/2/Version%20Control/3/Git/adding-a-git-client-node-in-an-individual-developer-centralized-workflow
|
code
|
Git (or another SCM) is essential in collaborative software development. It is also very helpful for individual developers not only for managing software configuration, maintaining a remote backup, maintaining a separate authoritative software configuration on a remote computer, but for allowing the individual developer to work on a project on multiple computers and have the current state of the repository available on all of the developer's computers.
This article describes the process for duplicating a repository on an additional computer when an instance of the repository already exists on one local computer and on one remote computer. The resulting collection of computers -- two of the developer's workstations and a central server, possibly using a service such as GitLab -- then constitutes the elements of a centralized distributed Git workflow with two client nodes and one central server.
Git SCM is an indispensible tool for developers collaborating on a software project using the various workflows described in Section 5.1 Distributed Git - Distributed Workflows of the Pro Git book. It is also useful to individual developers for managing software configuration just as it is for a collaborating group. Besides managing software configuration, individual developers can use it to maintain a remote backup, maintain a separate authoritative software configuration on a remote computer, and to allow the individual developer to work on a project on multiple computers and have the current state of the project repository available on all of the developer's computers by employing the centralized distributed workflow.
The following image illustrates two use cases for individual developers. The left pane depicts the simple backup scenario in which the developer works on a project and tracks changes on a repository on their workstation, labeled "original client node", and periodically pushes to a remote server, labeled "central server", in which the project repository has been duplicated. The project could have been created in a directory on the client node, the git repository initialized there and, then duplicated on the remote server.
The right pane depicts a git centralized distributed workflow, the simplest of the possible distributed git workflows described in Section 5.1 Distributed Git - Distributed Workflows of the Pro Git book. In this scenario multiple developers could work in the versions of the same repository on their local workstations, labeled "client nodes" and then push to a version of the repository acting as the authoritative version of the project repository on a central server, also referred to as a hub in Pro Git. With this workflow paradigm, reconciling changes made to the project by different developers can be problematic if there are conflicting changes to the project.
However, for an individual developer, this workflow is ideal for allowing the developer to work on any workstation and always have the current state of the repository available when they move to a different workstation. This only requires pushing to the central server at the end of a session on one workstation and pulling from the central server when beginning a session on another workstation. For an individual developer, it is also possible to achieve this without the central server, i.e., to always have the current state of the repository available when beginning a session on any node by pushing directly to all other nodes when ending a session on a node, but it is easier -- especially if there are many client nodes -- to push to the central server when finishing a work session on one client node and pulling from the central server when beginning a session on another client node.
In this article we discuss the process for setting up an additional client node to realize the second workflow paradigm, after the first has already been established. The essential part of the process is simply cloning the repository from the central server after ensuring that the instances of the repository on the client node and the central server are in the same current state. After the second client node has been established, the same process could be used to add additional client nodes to the workflow configuration.
Before cloning the central repository on the new client node it is best to ensure that the central repository is in the same state as the repository on the original client node, i.e., all of the latest changes made on the client node are incorporated in the central repository, otherwise when the central hub is cloned on the new client node, it will not reflect the current state of the project. On the original client node:
100% 15:14:03 USER: brook HOST: ARCH-16ITH6 on exp [!] example_repository ❯$ git add -u
100% 15:14:10 USER: brook HOST: ARCH-16ITH6 on exp [+] example_repository ❯$ git commit -m "Incremental commit. made changes to 'adding-a-client-node-in-a-git-centralized-workflow-source-configuration-manegement-system.html'."
[exp d598b18] Incremental commit. made changes to 'adding-a-client-node-in-a-git-centralized-workflow-source-configuration-manegement-system.html'.
 1 file changed, 48 insertions(+), 6 deletions(-)
100% 15:16:52 USER: brook HOST: ARCH-16ITH6 on exp PCD: 2s example_repository ❯$ git fetch . exp:master
From .
   88bdde3..d598b18  exp -> master
100% 14:04:36 USER: brook HOST: ARCH-16ITH6 on exp example_repository ❯$ git status
On branch exp
Your branch is up to date with 'origin/exp'.
nothing to commit, working tree clean
If changes have been incorporated into other branches, they can also be verified by first switching to the branch and using the same command:
100% 14:04:45 USER: brook HOST: ARCH-16ITH6 on exp example_repository ❯$ git checkout master
Switched to branch 'master'
100% 14:16:02 USER: brook HOST: ARCH-16ITH6 on master example_repository ❯$ git status
On branch master
nothing to commit, working tree clean
100% 15:16:34 USER: brook HOST: ARCH-16ITH6 on exp [⇡] example_repository ❯$ git push origin exp
Enumerating objects: 11, done.
Counting objects: 100% (11/11), done.
Delta compression using up to 16 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 2.37 KiB | 2.37 MiB/s, done.
Total 6 (delta 5), reused 0 (delta 0), pack-reused 0
remote:
remote: To create a merge request for exp, visit:
remote:   https://gitlab.com/gitlab-individual-username/example_repository/-/merge_requests/new?merge_request%5Bsource_branch%5D=exp
remote:
To gitlab.com:gitlab-individual-username/example_repository.git
   88bdde3..d598b18  exp -> exp
100% 15:16:52 USER: brook HOST: ARCH-16ITH6 on exp PCD: 2s example_repository ❯$ git fetch . exp:master
From .
   88bdde3..d598b18  exp -> master
100% 15:16:58 USER: brook HOST: ARCH-16ITH6 on exp example_repository ❯$ git push origin master
Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
To gitlab.com:gitlab-individual-username/example_repository.git
   88bdde3..d598b18  master -> master
Once the previous steps have been performed, the state of the repository on the existing client node and the server will be identical. We can now clone the repository from the central server to the new client by executing a git clone command. The command can take many options and up to two arguments, but at a minimum it requires one argument that identifies the remote server and the remote repository, using either the https or ssh protocol. If the central server is a GitLab server, the two possible arguments are displayed when clicking the "Clone" button on the project's main GitLab web page, as shown in the following image.
When the command is executed, if an argument that specifies a directory is not supplied, it will duplicate the remote repository in a new directory which it creates in the current directory, so the command should be executed from the directory that is to contain the new directory. The clone will include all branches of the repository, but when switching to the newly created directory, the default branch of the project will be checked out and its latest state will be activated as the working tree. The details of the command and the many variations in how it can perform the clone operation are viewable with git help clone.
If the hub is a GitLab server, the default branch should have been set previously. The current default branch setting can be viewed and modified in the web interface. The following image shows the project's settings page in the GitLab interface with default branch selection dropdown activated.
We will use the ssh protocol based repository identifier -- recommended in GitLab documentation -- as an argument to the clone command as in:
git clone firstname.lastname@example.org:user-name/repository-name.git
The following listing shows the command and its output.
100% 18:42:28 USER: brook HOST: G5-RHEL9 ~/G5_DataEXT4/project_parent_directory ❯$ git clone email@example.com:gitlab-individual-username/example_repository.git
Cloning into 'example_repository'...
remote: Enumerating objects: 779, done.
remote: Counting objects: 100% (177/177), done.
remote: Compressing objects: 100% (177/177), done.
remote: Total 779 (delta 107), reused 0 (delta 0), pack-reused 602
Receiving objects: 100% (779/779), 1.39 MiB | 13.34 MiB/s, done.
Resolving deltas: 100% (182/182), done.
To work on the repository we would need to cd to the newly created directory.
100% 18:42:49 USER: brook HOST: G5-RHEL9 ~/G5_DataEXT4/project_parent_directory ❯$ cd example_repository/
The following image shows a terminal in which the first command executed is the git clone command. After changing to the new directory created by the clone command, the Starship command prompt indicates that we are in a Git repository with the "master" branch checked out as the current branch; this is the default branch as set in GitLab. The next command shows some properties of the repository, and the following one changes the current branch to one named "exp".
At this point the repository on the new client node will be identical to those on the other client and the central server.
The configuration of the local repository on the new client node can be viewed with the git config --list command, shown with its output in the newly cloned repository in the following listing. One notable omission from the output is any user configuration. The reason for this and the necessary actions are discussed below in the section Other Issues -> Local Git User Configuration.
100% 18:42:57 USER: brook HOST: G5-RHEL9 on master example_repository ❯$ git config --list
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
firstname.lastname@example.org:gitlab-individual-username/example_repository.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
branch.master.remote=origin
branch.master.merge=refs/heads/master
In the previous image we saw that after switching to the new repository directory created by the clone command, the active branch is set to the master branch, the branch set as the default in the cloned repository, in this case the repository on the GitLab server. If the branch used to make new additions and modifications in the project is different from the default branch, as in our example which uses a branch named exp instead of the default master, then that branch needs to be checked out after cloning on the new client node. This was also shown in the previous image.
100% 18:43:07 USER: brook HOST: G5-RHEL9 on master example_repository ❯$ git checkout exp
Branch 'exp' set up to track remote branch 'exp' from 'origin'.
Switched to a new branch 'exp'
The output of the git log command, shown in the terminal window in the following image, indicates that the HEAD of each branch in the new client node repository and of each branch in the repository on the central server point to the same (latest) commit. The output also reveals one possibly unexpected item: a reference to a branch named origin/HEAD appears in the output on the new client node that does not appear when running git log on the original client node. This is the result of the clone operation and represents the default branch, mentioned earlier, in the cloned repository. (See this Stack Overflow discussion.)
In the output of git config --list shown above, there was no user configuration. Typically, Git users set a global user configuration along with other global configuration items, which would be shown in the output. These items are not visible in the example output, however, because in my workflows, Git repositories use different identity settings, so I don't bother with global Git user configuration and only set local Git user configuration specific to each repository. In this case, before working in this repository local Git user configuration must be set, otherwise when attempting to make the first commit an error would occur with a message indicating the "Author identity" is unknown, as shown in the output of the commit command in the following image:
In order to commit on the client node using a local Git user identity specific to the current repository, the configuration items user.email and user.name would need to be set using the command git config with the --local option as in:
brook … example_repository web-cloud-server git git config --local user.email "email@example.com"
brook … example_repository web-cloud-server git git config --local user.name "Real Name"
After setting these items, they are included in the configuration list output, as shown below.
brook … example_repository web-cloud-server git git config --list
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
firstname.lastname@example.org:gitlab-individual-username/example_repository.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
branch.master.remote=origin
branch.master.merge=refs/heads/master
email@example.com
user.name=Real Name
Once the repository is cloned on the new client node -- and the local repository-specific user configuration is set, if desired or necessary for the project -- the distributed workflow is ready to be used. When beginning a session for the first time on either client node, since all instances of the repository on all clients and the central server are identical, the developer can make changes to the project immediately. But at the end of the first session, the project must be pushed to the central server in order to be available to be pulled at the beginning of the next session on another client node. For example with:
git push origin exp
The following image of a terminal window shows the series of commands at the end of the session -- viewing the status of the repository after staging changes in tracked files (first command), committing (second command), pushing to the repository on the central server (third command), synchronizing the changes to the other branch (fourth command), and pushing the other branch to the central server (fifth command).
At the beginning of each subsequent session (if it is on a different client node than the previous session) the project must be pulled to the current client node, for example with:
git pull origin exp
And at the end of each subsequent session on any client, the project must be pushed to the central server, again in order to be available to be pulled from the central server at the beginning of the next session on another client. The following set of images show the beginning of a session on a different client node from the previous session.
The left pane shows the output of git log reflecting the initial state of the repository; it is as it was the last time this client node was used and does not show recent commits made on the other client node, which have also been pushed to the central server. The right pane shows pulling the project from the central server repository with the git pull command, something that needs to be done at the beginning of a session in the centralized distributed workflow for an individual developer with multiple machines.
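The per-session ritual described in this workflow (push at the end of a session, pull at the start of the next) can be captured in a small helper. This is a hypothetical convenience sketch, not a prescribed tool: the remote name "origin" and the branch names "exp" and "master" are assumptions matching the example, and the commands are returned as lists rather than executed.

```python
# Hypothetical helper assembling the git commands that bracket a work
# session in the centralized workflow (remote "origin", work branch "exp").
def end_of_session(branch="exp", default="master", remote="origin"):
    """Commands run when finishing work on one client node."""
    return [
        ["git", "add", "-u"],
        ["git", "commit", "-m", "Incremental commit."],
        ["git", "push", remote, branch],
        # sync the work branch into the local default branch without checkout
        ["git", "fetch", ".", f"{branch}:{default}"],
        ["git", "push", remote, default],
    ]

def start_of_session(branch="exp", remote="origin"):
    """Commands run when resuming work on a different client node."""
    return [["git", "pull", remote, branch]]
```

Each command list could be executed in order with subprocess.run(cmd, check=True) from the repository directory; keeping them as data makes the sequence easy to inspect or log first.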
This step is not strictly necessary, as changes can be synchronized among all clients and the central server at some future time. If there are no conflicting differences between the versions of the repository on the various computers, this is not difficult. However, if there are conflicting differences, they must be reconciled before all nodes and the central server can be synchronized.↩
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00064.warc.gz
|
CC-MAIN-2023-14
| 16,697
| 51
|
https://www-03.ibm.com/support/techdocs/atsmastr.nsf/fe582a1e48331b5585256de50062ae1c/dbb44293d686bdc0862572e3005f139b?OpenDocument
|
code
|
Solution Developer Marketing
iSeries; J.D. Edwards
|Abstract: With the introduction of the System i models 515 and 525 the IBM Oracle International Competency Center completed tests to evaluate the performance of Oracle’s JD Edwards EnterpriseOne 8.12 on a one-way system. The tests utilized a more robust test kit newly available from Oracle called Day-In-Life (DIL) which includes a more complete set of web user applications, batch, and a larger database than Oracle’s previous 17-script test kit.|
Hardware; Software; Solutions
J D Edwards
IBM System i Family
IBM, Oracle, JD Edwards, J.D. Edwards, JDE, EnterpriseOne, IBM i, iSeries, Benchmark, 525, 515
|Is this your first visit to Techdocs (the Technical Sales Library)?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937440.13/warc/CC-MAIN-20180420100911-20180420120911-00448.warc.gz
|
CC-MAIN-2018-17
| 731
| 8
|
https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg23920.html
|
code
|
The Broadcom SBA RAID is a stream-based device which provides RAID5/6 offload.
It requires a SoC-specific ring manager (such as the Broadcom FlexRM ring manager) to provide a ring-based programming interface. Due to this, the Broadcom SBA RAID driver (mailbox client) implements a DMA device having one DMA channel using a set of mailbox channels provided by the Broadcom SoC-specific ring manager driver (mailbox controller).
The Broadcom SBA RAID hardware requires the PQ disk position instead of the PQ disk coefficient. To address this, we have added a raid_gflog table which will help the driver convert a PQ disk coefficient to a PQ disk position.
This patchset is based on Linux-4.11-rc1 and depends on patchset "[PATCH v5 0/2] Broadcom FlexRM ring manager support". It is also available at the sba-raid-v6 branch of https://github.com/Broadcom/arm64-linux.git
Changes since v5:
- Rebased patches for Linux-4.11-rc1
Changes since v4:
- Removed dependency of bcm-sba-raid driver on kconfig option ASYNC_TX_ENABLE_CHANNEL_SWITCH
- Select kconfig options ASYNC_TX_DISABLE_XOR_VAL_DMA and ASYNC_TX_DISABLE_PQ_VAL_DMA for bcm-sba-raid driver
- Implemented device_prep_dma_interrupt() using a dummy 8-byte copy operation so that dma_async_device_register() can set the DMA_ASYNC_TX capability for the DMA device provided by the bcm-sba-raid driver
Changes since v3:
- Replaced SBA_ENC() with sba_cmd_enc() inline function
- Use list_first_entry_or_null() wherever possible
- Remove unwanted braces around loops wherever possible
- Use lockdep_assert_held() where required
Changes since v2:
- Dropped patch to handle DMA devices having support for fewer PQ coefficients in Linux Async Tx
- Added work-around in bcm-sba-raid driver to handle unsupported PQ coefficients using multiple SBA requests
Changes since v1:
- Dropped patch to add mbox_channel_device() API
- Used GENMASK and BIT macros wherever possible in bcm-sba-raid driver
- Replaced C_MDATA macros with static inline functions in bcm-sba-raid driver
- Removed sba_alloc_chan_resources() callback in bcm-sba-raid driver
- Used dev_err() instead of dev_info() wherever applicable
- Removed call to sba_issue_pending() from sba_tx_submit() in bcm-sba-raid driver
- Implemented SBA request chaining for handling (len > sba->req_size) in bcm-sba-raid driver
- Implemented device_terminate_all() callback in bcm-sba-raid driver
Anup Patel (4):
  lib/raid6: Add log-of-2 table for RAID6 HW requiring disk position
  async_tx: Fix DMA_PREP_FENCE usage in do_async_gen_syndrome()
  dmaengine: Add Broadcom SBA RAID driver
  dt-bindings: Add DT bindings document for Broadcom SBA RAID driver
 .../devicetree/bindings/dma/brcm,iproc-sba.txt |   29 +
 crypto/async_tx/async_pq.c                     |    5 +-
 drivers/dma/Kconfig                            |   14 +
 drivers/dma/Makefile                           |    1 +
 drivers/dma/bcm-sba-raid.c                     | 1785 ++++++++++++++++++++
 include/linux/raid/pq.h                        |    1 +
 lib/raid6/mktables.c                           |   20 +
 7 files changed, 1852 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/dma/brcm,iproc-sba.txt
 create mode 100644 drivers/dma/bcm-sba-raid.c
--
2.7.4
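The raid_gflog idea mentioned above can be illustrated with a short sketch. This is not the kernel code; it is a hedged illustration of how a discrete-log table over GF(2^8) (the RAID6 field, generator 2, reduction polynomial 0x11d) lets one convert a PQ disk coefficient back to a disk position, since RAID6 assigns disk n the Q coefficient 2^n.

```python
# Illustrative sketch (not the kernel implementation): build exp/log tables
# for GF(2^8) with the RAID6 reduction polynomial 0x11d, generator 2.
def build_tables():
    gfexp = [0] * 256
    gflog = [0] * 256
    v = 1
    for i in range(255):
        gfexp[i] = v
        gflog[v] = i          # discrete log base 2 of v
        v <<= 1
        if v & 0x100:
            v ^= 0x11d        # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
    return gfexp, gflog

gfexp, gflog = build_tables()

def coefficient_to_position(coeff):
    """RAID6 gives disk n the Q coefficient 2^n, so the disk position
    is simply the discrete log of the coefficient."""
    return gflog[coeff]
```

With the table in hand the hardware-facing driver can look up a position in O(1) instead of searching for which power of two matches the coefficient.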
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818692236.58/warc/CC-MAIN-20170925164022-20170925184022-00171.warc.gz
|
CC-MAIN-2017-39
| 3,033
| 2
|
https://lobste.rs/s/3lgr6a/ask_hn_why_is_bluetooth_audio_so
|
code
|
Bluetooth audio is so unreliable on Linux that I gave up pairing my headphones with the laptop.
Instead, I am using my Android smartphone to listen to music over my bluetooth headphones and it works like a charm.
There are multiple problems with bluetooth audio on my Linux machine, e.g. A2DP does not work after reconnecting, or the connection gets lost randomly. I do not know a single person for whom bluetooth audio on Linux works out of the box, and I have invested several hours reading journalctl logs and trying to fix pulseaudio's configuration, but I still can't pinpoint the source of the problem.
Somehow, other features like internet connection sharing over bluetooth seem to work fine.
What is your experience with bluetooth and especially bluetooth audio on Linux?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497309.31/warc/CC-MAIN-20200330212722-20200331002722-00112.warc.gz
|
CC-MAIN-2020-16
| 777
| 6
|
https://gmi.skyjake.fi/gemlog/2021-11_ansi-sgr.gmi
|
code
|
I've been thinking about Gemtext content vs. presentation directives.
It's a pretty interesting situation. In an "ideal" world, Gemtext would have no way for authors to specify visual attributes for the content, and clients could freely style pages as they see fit.
However, even Solderpunk's 100-line Python example client supports ANSI styling, since it's something that a terminal emulator handles for you automatically. One has to specifically prevent ANSI control sequences from reaching the terminal to avoid this. This leaves us in a situation where many (terminal-based) clients — including the one you write yourself — can just assume as a given that ANSI control sequences can be used for things like colored ASCII/Unicode art and highlighting words for emphasis.
Of course, this is all thanks to the environment where the program is running. The confluence of history has brought us a system where one can insert hidden control sequences in text, in a standardized fashion, and have it modify text appearance (and cursor position) on screen.
The Gemini protocol should be agnostic of such things, as it doesn't assume the use of a terminal emulator, but in practice with Gemini being so heavily text-focused, the terminal is one of the foremost and sometimes even the most preferable environment to use. Thus I think it would help for one of these things to happen:
Getting rid of ANSI styling altogether would resolve the ambiguity neatly, but it could also make writing a Gemtext parser more complicated, and restrict the potential applications for Gemtext. Parsing is perhaps the more serious issue: it's pretty trivial to use regular expressions to recognize the sequences and skip them, but a regex library is another dependency that may not be available for everyone.
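As a sketch of the regex approach just mentioned, a single pattern can recognize CSI sequences (which include SGR styling) and drop them. This uses Python's standard re module and is an illustration, not a prescription for how clients must behave:

```python
import re

# Matches CSI escape sequences such as SGR color codes: ESC [ params letter
CSI_RE = re.compile(r'\x1b\[[0-9;:]*[A-Za-z]')

def strip_ansi(text: str) -> str:
    """Remove ANSI CSI control sequences (including SGR styling) from text."""
    return CSI_RE.sub('', text)

styled = "\x1b[1;32mhello\x1b[0m world"
print(strip_ansi(styled))  # hello world
```

The dependency concern stands, though: a client written without a regex library would need a small hand-rolled state machine to do the same thing.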
Fully embracing ANSI styling has its own fallout effects. The source text may become unreadable in a normal text editor. The size of the content expands if control sequences are used heavily, with each sequence being several bytes long. There is no single agreed-upon way to interpret all of the sequences (the colors are perhaps the least ambiguous ones). There's the whole inline vs. external styling problem, akin to tags vs. CSS. And of course, as discussed before, screen readers may produce garbled nonsense output when encountering these sequences, unless the client ensures they are filtered out.
Perhaps the status quo is preferable, despite the ambiguity. The user gets to have the final say on the matter, in their selection of environment, client, and configuration settings.
The original Gemtext version of this page can be accessed with a Gemini client: gemini://skyjake.fi/gemlog/2021-11_ansi-sgr.gmi
https://unity.cn/releases/patch/5/5.6.4p1
(935563) - IL2CPP: Avoid stack overflow from occurring in Unity liveness logic (asset GC).
(944939) - IL2CPP: Allow SetSocketOption to work properly for add membership and remove membership with IPv6.
(none) - IL2CPP: Fixed calling System.Collections.Generic.IList`1 methods on native objects that implement Windows.Foundation.Collections.IVector`1 interface and calling Windows.Foundation.Collections.IVector`1 methods on managed objects that implement System.Collections.Generic.IList`1 interface.
(949032) - iOS: SystemInfo.supportedRenderTargetCount now correctly returns 8 for devices that support it.
(939661) - Lines: Fixed a case where looping lines with corner vertices were causing graphical corruption.
(952232, 952020) - Metal: Fixed a shader compilation regression on macOS 10.11.6 and iOS 8.x and earlier.
(none) - Multiplayer: Fixed internal timer updates after the I/O thread resumes.
https://nicholasworkshop.wordpress.com/tag/website/
Well, it is always good to have a refreshing layout. I spent like 2 days to tweak my website layout. It’s really awesome. Useless widgets are removed and background changed. There is even a whole new typography. I love it so much!
For both server and website developing, I usually use Xampp’s Apache to create a localhost server. However, I hate to put my important files in the htdocs inside Apache. Instead I would rather put the folder inside Documents, and make a link to htdocs.
First, create a symbolic link from your actual folder to htdocs. For example: ln -s "/Users/Nicholas/Documents/Xampp Workspace" /Applications/Xampp/xamppfiles/htdocs/nicholas. The quotes are needed because the path contains a space.
However, this alone is not enough, because the owner of your directory is Nicholas, not "nobody". Also, we cannot change the folder's owner to nobody, as that is not a valid login user. So what we're going to do is make Apache run as the user Nicholas. [Caution: this can create a security issue on your computer, since Xampp can now access the files of the user Nicholas]
To do that, add "User Nicholas" to the end of /Applications/Xampp/xamppfiles/etc/httpd.conf. Then restart Xampp.
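Since the space in "Xampp Workspace" is easy to get wrong, here is a throwaway check of the same symlink pattern, using stand-in paths under /tmp instead of the real ones:

```shell
# Stand-in paths under /tmp; the real ones are in the text above.
mkdir -p "/tmp/Xampp Workspace" /tmp/htdocs
rm -f /tmp/htdocs/nicholas
# The quotes keep the space in the source path from splitting it into two arguments.
ln -s "/tmp/Xampp Workspace" /tmp/htdocs/nicholas
readlink /tmp/htdocs/nicholas
```

Without the quotes, ln would see two source paths and fail (or link the wrong thing).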
Actually, this website was my portfolio webpage before becoming a WordPress blog. It has moved here: nicholasworkshop.com/portfolio. After switching to WordPress, I put the old webpage aside and kept forgetting to transfer the information from it to the front page. Anyway, I hope this can remind me of the projects and webpages I created.
Today, I moved nicholasworkshop.wordpress.com to my own domain nicholasworkshop.com, integrated with my old portfolio website. Thanks to my subscribers, my blog has had a steady viewing rate over time. However, all that traffic went to wordpress.com, because the blog was hosted there. To redirect it back to my own domain, I had been preparing the things needed for a long time.
Anyways, it is finally here and thank you for checking out my blog!
It’s quite common for a developer to make use of the user agent to determine which browser a user is using, especially for those who develop web services and websites. Recently I found a website with a huge database of mobile device information, including the user agent string and even the functions supported in each device's browser.
Usually when we extract information from other websites, the character encoding might not be correct. For example, j-jis.com is encoded with Shift-JIS, so if your MySQL charset is UTF-8, the stored text comes out garbled. To handle the conversion of character encodings, we can use iconv in PHP.
$url = "http://j-jis.com/";
$html = file_get_contents($url);
$html = preg_replace("/\r\n|\n|\t/", "", $html); // remove unwanted characters
$html = iconv("Shift_JIS", "UTF-8", $html); // convert encoding
echo $html;
JSON has been an efficient way to handle information and message exchange in web programming. For example, I usually use PHP to connect to MySQL and retrieve information, then output it as JSON, so a webpage can "AJAX" that JSON to create a dynamic view of itself. However, when the information involves Unicode characters, PHP turns them into unreadable escape codes. After googling, I found the following to be the best way to solve the problem.
$string = '你好嗎';
echo json_encode($string); // Output: "\u4f60\u597d\u55ce"
echo preg_replace("#\\\\u([0-9a-f]{4})#ie", "iconv('UCS-2', 'UTF-8', pack('H4', '\\1'))", json_encode($string)); // Output: 你好嗎
https://learn.microsoft.com/en-us/power-apps/maker/canvas-apps/working-with-data-sources
Understand data sources for canvas apps
In Power Apps, most canvas apps use external information stored in cloud services called Data Sources. A common example is a table in an Excel file stored in OneDrive for Business. Apps access these data sources by using Connections.
This article discusses the different kinds of data sources and how to work with table data sources.
It's easy to create an app that does basic reading and writing to a data source. But sometimes you want more control over how data flows in and out of your app. This article describes how the Patch, DataSourceInfo, Validate, and Errors functions provide more control.
Kinds of data sources
Data sources can be connected to a cloud service, or they can be local to an app.
Connected data sources
The most common data sources are tables, which you can use to retrieve and store information. You can use connections to data sources to read and write data in Microsoft Excel workbooks, lists created using Microsoft Lists, SharePoint libraries, SQL tables, and many other formats, which can be stored in cloud services such as OneDrive for Business, DropBox, and SQL Server.
Data sources other than tables include email, calendars, Twitter, and notifications, but this article doesn't discuss these other kinds of data sources.
Local data sources
When you ask Power Apps to create an app from data, controls such as galleries and edit forms are used. Behind the scenes, the app uses an internal table to store and manipulate the data that comes from the data source.
A special kind of data source is the Collection, which is local to the app and not backed by a connection to a service in the cloud, so the information cannot be shared across devices for the same user or between users. Collections can be loaded and saved locally.
Kinds of tables
Tables that are internal to a Power Apps app are fixed values, just as a number or a string is a value. Internal tables aren't stored anywhere, they just exist in your app's memory. You can't directly modify the structure and data of a table. What you can do instead is to create a new table through a formula: you use that formula to make a modified copy of the original table.
External tables are stored in a data source for later retrieval and sharing. Power Apps provides "connections" to read and write stored data. Within a connection, you can access multiple tables of information. You'll select which tables to use in your app, and each will become a separate data source.
To learn more, Working with tables goes into more detail about internal tables, but it is also applicable to external tables residing in a cloud service.
Working with tables
You can use table data sources the same way that you use an internal Power Apps table. Just like an internal table, each data source has records, columns, and properties that you can use in formulas. In addition:
The data source has the same column names and data types as the underlying table in the connection.
For SharePoint and Excel data sources that contain column names with spaces, Power Apps will replace the spaces with "_x0020_". For example, "Column Name" in SharePoint or Excel will appear as "Column_x0020_Name" in Power Apps when displayed in the data layout or used in a formula.
The data source is loaded from the service automatically when the app is loaded. You can force the data to refresh by using the Refresh function.
As users run an app, they can create, modify, and delete records and push those changes back to the underlying table in the service.
Creating data sources
Power Apps can't be used to create a connected data source, or modify its structure; the data source must already exist in a service somewhere. For example, to create a table in an Excel workbook stored on OneDrive, you first use Excel Online on OneDrive to create a workbook. Next you create a connection to it from your app.
However, collection data sources can be created and modified inside an app, but are only temporary.
Display one or more records
This is the flow of information when an app reads the information in a data source:
- The information is stored and shared through a storage service (in this case, Microsoft Lists or SharePoint Online).
- A connection makes this information available to the app. The connection takes care of authentication of the user to access the information.
- When the app is started or the Refresh function is pressed, information is drawn from the connection into a data source in the app for local use.
- Formulas are used to read the information and expose it in controls that the user can see. You can display the records of a data source by using a gallery on a screen and wiring the Items property to the data source: Gallery.Items = DataSource. You wire controls within the gallery, to the gallery, using the controls' Default property.
- The data source is also a table. So you can use Filter, Sort, AddColumns, and other functions to refine and augment the data source before using it as a whole. You can also use the LookUp, First, Last, and other functions to work with individual records.
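As a sketch of that last point in formulas (Customers, City, and Name are hypothetical data source and column names):

```
// Show New York customers, sorted by name, in a gallery's Items property.
SortByColumns( Filter( Customers, City = "New York" ), "Name" )

// Work with a single record.
LookUp( Customers, Name = "Contoso" )
```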
Modify a record
In the preceding section, you saw how to read a data source. Note that information flows one way: changes to a data source aren't pushed back through the same formulas in which the data was retrieved. Instead, new formulas are used. Often a different screen is used for editing a record than for browsing records, especially on a mobile device.
Note that, to modify an existing record of a data source, the record must have originally come from the data source. The record may have traveled through a gallery, a context variable, and any number of formulas, but its origin should be traceable back to the data source. This is important because additional information travels with the record that uniquely identifies it, ensuring that you modify the correct record.
This is the flow of information when the app updates a data source:
- An Edit form control provides a container for input cards, which are made up of user input controls such as a text-input control or a slider. The DataSource and Item properties are used to identify the record to edit.
- Each input card has a Default property, which is usually set to the field of the form's ThisItem record. The controls within the input card will then take their input values from Default. Normally you do not need to modify this.
- Each input card exposes an Update property. This property maps the user's input to a specific field of the record for writing back to the data source. Normally you do not need to modify this.
- A button or an image control on the screen enables the user to save changes to the record. The OnSelect formula of the control calls the SubmitForm function to do this work. SubmitForm reads all the Update properties of the cards and uses this to write back to the data source.
- Sometimes there will be issues. A network connection may be down, or a validation check is made by the service that the app didn't know about. The Error and ErrorKind properties of the form control make this information available, so you can display it to the user.
For more fine-grained control over the process, you can also use the Patch and Errors functions. The Edit form control exposes an Updates property so that you can read the values of the fields within the form. You can also use this property to call a custom connector on a connection, completely bypassing the Patch and SubmitForm functions.
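For example, a minimal Patch call on a button's OnSelect might look like this (Customers and its columns are hypothetical names):

```
// Update one field of an existing record; the base record
// must originally have come from the data source.
Patch(
    Customers,
    LookUp( Customers, Name = "Contoso" ),
    { Phone: "555-0100" }
)
```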
Before making a change to a record, the app should do what it can to make sure the change will be acceptable. There are two reasons for this:
- Immediate feedback to the user. The best time to fix a problem is right when it happens, when it is fresh in the user's mind. Literally with each touch or keystroke, red text can appear that identifies an issue with their entry.
- Less network traffic and less user latency. More issues detected in the app means fewer conversations over the network to detect and resolve issues. Each conversation takes time during which the user must wait before they can move on.
Power Apps offers two tools for validation:
- The data source can provide information about what is and isn't valid. For example, numbers can have minimum and maximum values, and one or more entries can be required. You can access this information with the DataSourceInfo function.
- The Validate function uses this same information to check the value of a single column or of an entire record.
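A sketch of both tools (Customers and Discount are hypothetical names; Validate returns an error message, or blank if the value is acceptable):

```
// Ask the data source for its constraints on a column.
DataSourceInfo( Customers, DataSourceInfo.MaxValue, "Discount" )

// Check a single proposed value against those constraints.
Validate( Customers, "Discount", 200 )
```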
Great, you've validated your record. Time to update that record with Patch!
But, alas, there may still be a problem. The network is down, validation at the service failed, or the user doesn't have the right permissions, just to name a few of the possible errors your app may encounter. It needs to respond appropriately to error situations, providing feedback to the user and a means for them to make it right.
When errors occur with a data source, your app automatically records the error information and makes it available through the Errors function. Errors are associated with the records that had the problems. If the problem is something the user can fix, such as a validation problem, they can resubmit the record, and the errors will be cleared.
If an error occurs when a record is created with Patch or Collect, there is no record to associate any errors with. In this case, blank will be returned by Patch and can be used as the record argument to Errors. Creation errors are cleared with the next operation.
The Errors function returns a table of error information. This information can include the column information, if the error can be attributed to a particular column. Use column-level error messages in label controls that are close to where the column is located on the edit screen. Use record-level error messages where the Column in the error table is blank, in a location close to the Save button for the entire record.
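For instance, a label near the Save button could surface the first message reported for the last submit (EditForm1 is a hypothetical form control name):

```
// Text property of an error label.
First( Errors( Customers, EditForm1.LastSubmit ) ).Message
```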
Working with large data sources
When you are creating reports from large data sources (perhaps millions of records), you want to minimize network traffic. Let's say you want to report on all Customers who have a StatusCode of "Platinum" in New York City, and that your Customers table contains millions of records.
You do not want to bring those millions of Customers into your app, and then choose the ones you want. What you want is to have that choosing happen inside the cloud service where your table is stored, and only send the chosen records over the network.
Many, but not all, functions that you can use to choose records can be delegated, which means that they are run inside the cloud service. You can learn how to do this by reading about Delegation.
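The Platinum-customers example above, written so that the filtering can happen in the service (Customers, StatusCode, and City are hypothetical names; whether this is delegated depends on the connector):

```
// Filter with simple equality checks is delegable for many connectors,
// so only the matching records cross the network.
Filter( Customers, StatusCode = "Platinum" && City = "New York" )
```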
Collections are a special kind of data source. They're local to the app and not backed by a connection to a service in the cloud, so the information cannot be shared across devices for the same user or between users. They operate like any other data source, with a few exceptions:
- Collections can be created dynamically with the Collect function. They don't need to be established ahead of time, as connection-based data sources do.
- The columns of a collection can be modified at any time using the Collect function.
- Collections allow duplicate records. More than one copy of the same record can exist in a collection. Functions such as Remove will operate on the first match they find, unless the All argument is supplied.
- You can use the SaveData and LoadData functions to save and reload a copy of the collection. The information is stored in a private location that other users, apps, or devices can't access.
- You can use the Export and Import controls to save and reload a copy of the collection to a file that the user can interact with.
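A sketch of the collection functions listed above (Cart and "CartBackup" are hypothetical names):

```
// Create or extend a collection dynamically.
Collect( Cart, { Item: "Widget", Qty: 2 } );

// Save a private copy on the device, then reload it later.
SaveData( Cart, "CartBackup" );
LoadData( Cart, "CartBackup", true )
```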
For more information on working with a collection as a data source, see create and update a collection.
Collections are commonly used to hold global state for the app. See working with variables for the options available for managing state.