| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
https://www.improvingwetware.com/2015/07
|
code
|
Posted by Pete McBreen 30 Jul 2015 at 13:28
And then this article about testing — Doing Terrible Things To Your Code reminded me to look at it again.
QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv.
I sure wish more programmers would focus a lot of attention on testing their own code before passing it on to QA/Test. That way the QA/Test team can focus on finding the requirements and interaction defects, rather than the simple coding mistakes that are often the bane of their existence.
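A minimal sketch of what that kind of developer-side edge-case testing can look like, written here with pytest against a hypothetical `order_beers` function (both the function and its error behaviour are illustrative assumptions, not something from the article):

```python
import pytest

def order_beers(quantity):
    """Hypothetical order function, used only to illustrate edge-case tests."""
    if not isinstance(quantity, int):
        raise TypeError("quantity must be an integer")
    if quantity < 0:
        raise ValueError("quantity must not be negative")
    if quantity > 1_000_000:
        raise ValueError("quantity exceeds the maximum order size")
    return {"item": "beer", "quantity": quantity}

def test_normal_order():
    assert order_beers(1)["quantity"] == 1

def test_zero_beers():
    assert order_beers(0)["quantity"] == 0

def test_huge_order_is_rejected():
    with pytest.raises(ValueError):
        order_beers(999_999_999)

def test_negative_order_is_rejected():
    with pytest.raises(ValueError):
        order_beers(-1)

@pytest.mark.parametrize("bad_input", ["a lizard", "sfdeljknesv", None, 1.5])
def test_non_integer_orders_are_rejected(bad_input):
    with pytest.raises(TypeError):
        order_beers(bad_input)
```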
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679515260.97/warc/CC-MAIN-20231211143258-20231211173258-00873.warc.gz
|
CC-MAIN-2023-50
| 582
| 4
|
https://www.biginsight.no/events/2018/8/16/biostatistics-seminar-gabriela-gomes
|
code
|
We’re happy to announce the first biostatistics seminar of this semester, Thursday August 16th at 14.30, in room 2180, Domus Medica, Oslo.
Speaker: Gabriela Gomes, Reader in Biomathematics, Liverpool School of Tropical Medicine, UK, and Centro de Investigação em Biodiversidade e Recursos Genéticos, Universidade do Porto, Portugal.
Title: A unified framework to account for unobserved heterogeneity in demography, epidemiology, ecology and evolution.
Abstract: Unobserved heterogeneity was introduced in 1920 as a modifier of individual hazards. The concept was termed frailty in demography to describe variation in individual longevity, and has been incorporated in methods for survival analysis. As the frailest individuals are removed earlier from a heterogeneous group, mean hazards appear to decrease over time – cohort selection – leading to some of the most elusive effects in population sciences. Despite the accumulation of documented fallacies induced by cohort selection, the issue remains largely overlooked. I will expose the ubiquity of the phenomenon and propose a unified framework to infer and compare trait distributions, with examples of current interest in epidemiology, ecology, and evolution: (1) Vaccines appear less effective in high-incidence settings. Are they, really? (2) What is the intrinsic effect of Wolbachia on mosquito susceptibility to dengue viruses? (3) As populations of bacteria are exposed to antibiotics, their mortality rates decline due to selection for noninherited resistance. How does this affect the measurement of fitness effects of new mutations? (4) What does cohort selection add to the debate between neutral and niche theories of biodiversity?
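As a rough illustration of the cohort-selection effect described above (my own toy sketch, not material from the talk): give each individual a fixed hazard drawn from a frailty distribution, simulate survival, and the average hazard among survivors falls over time even though no individual's hazard ever changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each individual gets a constant hazard drawn from a gamma ("frailty") distribution.
n = 100_000
hazards = rng.gamma(shape=2.0, scale=0.05, size=n)

# With a constant individual hazard h, the death time is exponential with mean 1/h.
death_times = rng.exponential(1.0 / hazards)

for t in [0, 5, 10, 20, 40]:
    alive = death_times > t
    print(f"t={t:>2}: {alive.mean():6.1%} alive, "
          f"mean hazard among survivors = {hazards[alive].mean():.4f}")
```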
Jon Michael Gran
More seminars will be announced soon; see http://www.med.uio.no/imb/english/research/centres/ocbe/events/biostat-seminar/
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573368.43/warc/CC-MAIN-20190918213931-20190918235931-00319.warc.gz
|
CC-MAIN-2019-39
| 1,846
| 6
|
https://learningnetwork.cisco.com/thread/42625
|
code
|
Thanks for your reply. The reason I want to know is that the book didn't talk much about it; it gave a brief definition with a simple example. The author, however, mentioned that (for example) EEM is a huge topic that cannot be covered in this book.
That got me wondering whether I should do more research to learn these topics, and if so, how much I should learn.
If somebody goes to the exam without knowing BGP, I am sure you would say, "Mate, don't bother taking the exam," because it's an important subject.
Thanks for your reply again.
There are core topics you must know: IGPs, BGP, VLANs, etc. Without knowing them you will fail the core tasks and therefore lose many points, because you will fail not only the core tasks but everything that depends on them too. For example, IP multicast tasks are based on IP unicast routing, and you need your IGP to be set up correctly for them. So by failing 1-2 core tasks you obviously fail the entire exam.
All other topics are secondary; they are not as critical. Theoretically you may fail many of them and still pass the exam. All the topics you mentioned are secondary, as far as I can see. By failing NetFlow you probably lose only 2-3 points.
But secondary doesn't mean simple. OER/PfR, EEM, SLA, etc. are rather complex and really huge.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945637.51/warc/CC-MAIN-20180422174026-20180422194026-00327.warc.gz
|
CC-MAIN-2018-17
| 1,242
| 7
|
http://velocity.uwaterloo.ca/2012/12/facebook-hackathon/
|
code
|
Waterloo Wins This Year’s Facebook Hackathon!
Waterloo did it again.
And by again, we mean took home the championships for another Facebook Hackathon. This time, it was the Facebook Global University Hackathon.
The original team that competed earlier on this year in the first round, the Facebook Waterloo Hackathon, consisted of Peter Sobot, Jinny Kim, and Scott Greenlay. They won regionally with NinjaQuote, a Facebook game that tests how well you know your friends. This is the second win for the University of Waterloo, with last year’s Facebook Waterloo winners being current and former VeloCity residents, Christophe Biocca, Akash Vaswani, Drew Gross, and Carlo Barraco.
Facebook has run approximately 18 hackathons, mostly in the U.S., with a few in Canada (UBC and Waterloo), Brazil, and Ukraine. Winners from each of these hackathons went on to compete at Facebook HQ in Menlo Park, CA for three days (November 28-30) in the finals.
Three solid days of hack and awesomeness and they’ve earned the title of this year’s Facebook Hackathon Champions!
In the Facebook finals, the team that built NinjaQuote was joined by an additional team member, Fravic Fernando (replacing Peter, who could not make it). This time around, the VeloCity residents worked on an iPhone application called Quin.
Quin allows you to ask your phone questions about your Facebook friends using your voice, and returns you graphs showing various attributes (e.g. gender, popularity, relationship status). It visualizes data in a way that normal Facebook search can’t. If you’re curious, you can check out the source for the hack on GitHub.
The University of Waterloo team beat out 17 other competing teams with Quin and took home $3,000! Watch Scott talk about the win in his VeloCity hoodie on CTV National News here.
A big thanks to TNW for their coverage of Waterloo’s big win!
As for whether or not Quin will live beyond the walls of Facebook’s Hackathon, time will tell, stay tuned!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141460.64/warc/CC-MAIN-20200217000519-20200217030519-00072.warc.gz
|
CC-MAIN-2020-10
| 1,989
| 11
|
https://digilent.com/reference/learn/programmable-logic/tutorials/nexys-video-oled-demo/start
|
code
|
Nexys Video OLED Demo
The OLED Demo project demonstrates a simple usage of the Nexys Video's Organic Light Emitting Diode (OLED) Display.
|8 user switches||X|
|8 user LEDs||X|
|128×32 monochrome OLED display||X|
|160-pin FMC LPC connector||X|
|Micro SD card connector||X|
|HDMI Sink and HDMI Source||X|
|Audio codec w/ four 3.5mm jacks||X|
|6 user push buttons||X|
|10/100/1000 Ethernet PHY||X|
|512MiB 800Mt/s DDR3 Memory||X|
|Four Pmod ports||X|
|Pmod for XADC signals||X|
|USB HID Host||X|
- Basic familiarity with Vivado
- This experience can be found by walking through our “Getting Started with Vivado” guide
- Nexys Video FPGA board
- Micro-USB cable
- Zedboard 12 Volt Power Supply
- Vivado Design Suite 2016.4
- Newer/older versions can be used, but the procedure may vary slightly
Nexys Video OLED Demo Project Repository – ZIP Archive GIT Repo
Download and Launch the Nexys Video OLED Demo
1) Follow the Using Digilent Github Demo Projects Tutorial. This is an HDL design project and, as such, does not support Vivado SDK; select the tutorial options appropriate for a Vivado-only design. Return to this guide when prompted to check for extra hardware requirements and setup.
2) Ensure that your board is plugged into a 12 Volt power supply and connected to your computer via a Micro USB cable attached to the PROG port. Then return to the Github Projects Tutorial to finish programming the demo onto your board.
Using the Nexys Video OLED Demo
This portion will help you run the demo and observe all its features.
4.1) Startup and Bringdown
The procedures for safely turning the OLED display on and off are handled by the CPU Reset Button. When the board is first turned on, the display is off and must be brought up: press the CPU Reset Button to turn it on. Once the display is on, when you are done operating the demo and want to turn your board off, press the CPU Reset Button again to turn the display off. The status of the display is indicated by LED0: if it is lit, the display is on.
4.2) Toggle the Display
Once the display has been turned on, every LED on the display can be lit at once by pressing the center D-Pad button. To return the display to its original state, press this button again.
4.3) Update the Display
With the display on, you can load text onto the display by pressing the up D-Pad button. To clear the display again, press the down D-Pad button. Toggling the entire display while text is loaded will still turn each LED on, then return the display to the splash screen text.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00643.warc.gz
|
CC-MAIN-2023-14
| 2,524
| 34
|
https://ms.copernicus.org/articles/2/27/2011/ms-2-27-2011.html
|
code
|
The use of underactuation in prosthetic grasping
Abstract. Underactuation as a method of driving prosthetic hands has a long history. The pragmatic requirements of such a device to be light enough to be worn and used regularly have meant that any multi degree of freedom prosthetic hand must have fewer actuators than the usable degrees of freedom. Aesthetics ensures that while the hand needs five fingers, five actuators have considerable mass, and only in recent years has it even been possible to construct a practical anthropomorphic hand with five motors. Thus there is an important trade off as to which fingers are driven, and which joints on which fingers are actuated, and how the forces are distributed to create a functional device. This paper outlines some of the historical solutions created for this problem and includes those designs of recent years that are now beginning to be used in the commercial environment.
This paper was presented at the IFToMM/ASME International Workshop on Underactuated Grasping (UG2010), 19 August 2010, Montréal, Canada.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510085.26/warc/CC-MAIN-20230925183615-20230925213615-00355.warc.gz
|
CC-MAIN-2023-40
| 1,068
| 3
|
http://www.theflyfishingforum.com/forums/526386-post3.html
|
code
|
Re: Fly Line HELP!
Welcome to the forum!
It sounds to me like you are in need of a good all around line....
Scientific Anglers Mastery Series GPX WF5F should be right up your alley.
I am sure you will get a dozen different answers, and they will most likely all be correct. The lesson to take from this is that there isn't really a "best" line - it's all about what line you can afford and what you like, which is something that only time and experience will teach you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119642.3/warc/CC-MAIN-20170423031159-00581-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 458
| 5
|
https://www.theladders.com/job/senior-lead-cloud-engineer-amfam-jackson-mn_46283787
|
code
|
Job Family Summary
Drives IT engineering solutions, framework, roadmap, program optimization and process engineering in a high-velocity culture by introducing technology, requirements, deliverables, gaps and systems design. Analyzes competitive strategies, infrastructure technologies, metrics models, and performance indicators. Contributes to robust and innovative strategic solutions and builds resilient support for next-generation systems to solve business challenges.
Job Level Summary
- Recognized as an expert within the company and requires in-depth and/or breadth of expertise in own job discipline and broad knowledge of other job disciplines within the organization function.
- Solves unique problems that have a broad impact on the business.
- Contributes to the development of organization functional strategy.
- Progression to this level is typically restricted on the basis of business requirements.
- Identity management experience (Okta, Active Directory, etc.)
- Experience in writing scripts in one or more languages such as Python, Go, or similar.
- Be a trusted technical advisor to customers and set up their Cloud Foundations.
- You come with prior experience in public cloud (AWS, GCP or others), automation tools such as CloudFormation, Terraform, etc.
- Experience with cluster deployment and orchestration technologies (e.g., Puppet, Chef, Salt, Ansible, Docker, Kubernetes, Mesos, Jenkins).
- You have experience managing cloud infrastructure in AWS, GCP, Azure, or similar (we use AWS & GCP). You have built and deployed containerized applications and services to Kubernetes (e.g. GKE, EKS).
- Experience with transactional database systems (MySQL, PostgreSQL, MongoDB, and Cassandra) or data analytics tools.
- Experience in system administration tasks in Linux, Unix, or Windows and familiarity with standard IT security practices (e.g., encryption, certificates, key management).
- Design, build, and operate the core platform infrastructure (e.g. cloud resources, container orchestration, continuous deployment) used by all Amfam’s engineering teams.
- Diagnose and fix problems stemming from complex interactions of a wide range of components in a modern service platform.
- Guide the broader team in making intelligent and pragmatic technical trade-offs.
- Act in key support roles during major incidents. Participate in the post-incident review process. Write and maintain runbooks and documentation.
- Design and implement processes to maintain and improve organization-wide reliability.
- Optimize build pipeline and tooling to reduce CI/CD times.
Specialized Knowledge & Skills Requirements
- 5+ years of applicable engineering experience
- Strong coding skills in one or more languages - we mostly write Python and Go at Amfam, but experience with others is also valuable
- Experience with creating and managing AWS/GCP/Azure resources with Terraform
- Familiarity with monitoring systems such as Dynatrace and cloud native tools.
- Experience with modern networking aspects
- Practical experience building and administering Kubernetes clusters
- Ability to work independently with minimal supervision
- Ability to participate in a 24/7 on-call rotation
- This position requires travel up to 10% of the time.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00445.warc.gz
|
CC-MAIN-2021-25
| 3,249
| 31
|
https://www2.thunderheadeng.com/2015/01/creating-t-squared-fire-pyrosim/
|
code
|
Often, a fire size and growth rate (Fast, Slow, etc. ‘alpha’ values) are prescribed, and the time to reach the peak Heat Release Rate per Unit Area (HRRPUA) based on the specified alpha value is unknown. In PyroSim, there is not a direct way to enter or select the alpha value for a t-squared fire to determine the time to reach the peak. This short video will show you how you can determine the time to peak HRRPUA, based on standard alpha values or a custom value you input, through a web based tool.
Please note, if you know the peak HRRPUA and the time in which you would like it to reach this peak, then you can enter the values directly into PyroSim.
Link to calculator: http://www.koverholt.com/t-squared-fire-ramp-calculator/
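The relationship behind the tool is simply Q(t) = α·t², so the time to reach a given peak heat release rate follows directly. A small sketch of the same calculation (the alpha values below are the commonly cited t-squared growth coefficients in kW/s²; confirm them against your own reference before relying on them):

```python
import math

# Commonly cited t-squared growth coefficients in kW/s^2 (verify against your reference).
ALPHA = {
    "slow": 0.00293,
    "medium": 0.01172,
    "fast": 0.0469,
    "ultrafast": 0.1876,
}

def time_to_peak(peak_hrr_kw, alpha):
    """Time in seconds for a t-squared fire Q(t) = alpha * t**2 to reach peak_hrr_kw."""
    return math.sqrt(peak_hrr_kw / alpha)

# Example: a 1055 kW peak fire with "fast" growth reaches its peak at about 150 s.
print(f"{time_to_peak(1055, ALPHA['fast']):.0f} s")
```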
PyroSim file: t^squared Example File
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506329.15/warc/CC-MAIN-20230922034112-20230922064112-00674.warc.gz
|
CC-MAIN-2023-40
| 774
| 4
|
http://www.adisc.org/forum/showthread.php/94909-Morals
|
code
|
So I have been using in the closet for roughly 10 years now. And as much as I enjoy it, it kinda makes me feel bad. I live with a physically and mentally disabled brother who is roughly the same size as me, and I pretty much have an unlimited, unquestionable stash hidden in plain sight. The problem is: is it wrong for me to use these because I don't need them but enjoy them? Or does the fact that he doesn't have a shortage of them make it OK for me to continue? I just want some feedback.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720845.92/warc/CC-MAIN-20161020183840-00531-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 476
| 1
|
https://e-militaria.eu/holsters-46
|
code
|
Pistol holster with an additional mechanical security system. Designed for use in dynamic actions or in large crowds. The additional security system prevents loss or seizure of the weapon. The holster features a comfortable belt clip.
Brought back by popular demand, the first holster created by Sandro Amadini is re-introduced with a new design and locking system which consists of a polymer locking sphere providing security and speed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514578201.99/warc/CC-MAIN-20190923193125-20190923215125-00034.warc.gz
|
CC-MAIN-2019-39
| 435
| 2
|
https://github.com/nipy/dipy/issues/1473/
|
code
|
Uploading Windows wheels #1473
I've now built 0.13.0 wheels for all versions of Windows we support, over at:
with driving repo at:
There's a single test failure for Python 3.6, 32-bit - described at #1472 . I suppose it's benign? It looks like it's a numpy issue rather than a Dipy one. So, for that wheel, I just temporarily skipped the tests before upload.
Any objection to pushing up the built wheels?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823712.21/warc/CC-MAIN-20181212022517-20181212044017-00122.warc.gz
|
CC-MAIN-2018-51
| 562
| 7
|
https://www.wvm.edu/committees/bac/Pages/default.aspx
|
code
|
The purpose of the District Benefits Committee is based on the common elements from agreements of the collective bargaining units and unrepresented groups:
- Research and share information with its constituencies
- Act in an advisory capacity to its constituencies
- Findings and recommendations shall be presented to the constituent groups
- All changes in fringe benefits have to be negotiated
See the District Benefits Committee bylaws.
The new employee benefit webpage can be found at: https://www.smartben.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100287.49/warc/CC-MAIN-20231201120231-20231201150231-00246.warc.gz
|
CC-MAIN-2023-50
| 513
| 7
|
https://lists.x.org/archives/xorg/2007-March/022464.html
|
code
|
patrol at sinus.cz
Sat Mar 17 05:18:28 PDT 2007
Hi Keith, writing again, I've progressed a bit...
> A code snippet from this file:
> dev->DevName = "ddc2";
> dev->SlaveAddr = 0xA0;
> dev->ByteTimeout = 2200; /* VESA DDC spec 3 p. 43 (+10 %) */
> dev->StartTimeout = 550;
> dev->BitTimeout = 40;
> dev->ByteTimeout = 40;
> dev->AcknTimeout = 40;
> Which of the two dev->ByteTimeout assignments is correct? I tried to comment
> out the second one (with the lower value) and also increased all the values
> up to four times. It looked that it helps, but then it turned out to be
> probably influenced by my mental power :-); after a lot of experiments, a
> good/bad ratio returned to the value approximately the same as in the
I was playing with those, tried various combinations of timeout, but no success.
Then, I studied the code a bit more and found a RETRIES macro defined as 4. At
first, I changed it to 400 :-). The server then took about 10 seconds to start, but
EDID was always read successfully. Then I decreased it to 40. Now it starts faster, and
in about 20 attempts it failed to read EDID only once.
There is definitely a bug somewhere, but this seems to be a reasonable
workaround for me.
Then I've found that my PreferredMode option contained a typo (1599 instead
of 1600, yes, all the right-hand fingers were off by one :-) ). After that, it
starts in 1600x1200 with EDID (it still uses 1152xsomething without it).
The DPI and screen size reported are correct now. I think that the problems
which I observed formerly were caused by the fact that the server reported
different things depending on EDID being ok or not. This could confuse the KDE
info center, fontconfig xft and other packages.
So now I can use the driver for testing purposes. I will carefully watch
for problems and report them here.
With regards, Pavel Troller
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00466.warc.gz
|
CC-MAIN-2023-14
| 1,863
| 34
|
https://owasp.org/www-project-devsecops-guideline/
|
code
|
OWASP DevSecOps Guideline
The OWASP DevSecOps Guideline focuses on explaining how we can implement a secure pipeline, use best practices, and introduce tools that we can use in this matter. The project also tries to help promote a shift-left security culture in our development process.
This project helps companies of any size that have a development pipeline, or in other words a DevOps pipeline. During this project, we try to draw a perspective of a secure DevOps pipeline and then improve it based on our customized requirements.
To start, we consider implementing the following steps in a basic pipeline:
- Take care of secrets and credentials in git repositories
- SAST (Static Application Security Test)
- DAST (Dynamic Application Security Test)
- Infrastructure scanning
- Compliance check
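A minimal sketch of how those stages might be strung together in a pipeline script; the commands are placeholders rather than tools recommended by the guideline, so substitute whatever scanners your organisation already uses:

```python
import subprocess
import sys

# Placeholder commands for each stage; replace each with your organisation's actual tools.
placeholder = [sys.executable, "-c", "print('placeholder scan passed')"]
STAGES = [
    ("secrets scanning in git repositories", placeholder),
    ("SAST (static application security test)", placeholder),
    ("DAST (dynamic application security test)", placeholder),
    ("infrastructure scanning", placeholder),
    ("compliance check", placeholder),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(command).returncode != 0:
            # Fail fast: a failing security stage should break the build.
            sys.exit(f"stage '{name}' failed")

if __name__ == "__main__":
    run_pipeline()
```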
Feel free to contribute to this project; contributors are welcome to make a PR on the project repo.
Contributing to this project is simple: go to the project's GitHub repo and send a new pull request.
Please do not hesitate to create an issue if you have any idea or recommendation.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00035.warc.gz
|
CC-MAIN-2021-43
| 1,151
| 12
|
https://acrobat.uservoice.com/forums/926812-acrobat-reader-for-windows-and-mac/suggestions/43010331-prevent-adobe-reader-dc-window-from-move-resize-zo?category_id=379459
|
code
|
Prevent Adobe Reader DC window from move/resize/zoom from pdf file InitialView definition
I usually have a few PDF files open on my 3rd monitor (datasheets, manuals, etc. - documentation for something that I am working on right now). I also send inquiries and initiate orders for things I need... When I open PDFs with invoices/inquiries, those files tend to move the Adobe Reader window to my center (primary) monitor and resize it (and change the zoom) - it drives me mad.
I did a bit of research, and this must happen because those files have a so-called "InitialView" defined, so here is my petition:
Add an option that prevents the Adobe Reader DC window from moving and resizing when opening a PDF file that has this "annoying InitialView" set up (e.g. a checkbox named "Ignore InitialView of PDF document").
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00462.warc.gz
|
CC-MAIN-2022-05
| 790
| 4
|
https://huntr.dev/bounties/9c229475-7b2f-44bf-ae74-15b23f3d27ce/
|
code
|
CSV Injection in CSV files generated by the backend in limesurvey/limesurvey
Mar 12th 2023
1. Log in at https://demo.limesurvey.org/index.php
2. The demo admin creates a user with the name "=1+cmd|'/C calc'!A0".
4. Other users log in and download all the users' data as a CSV.
5. Other users open the CSV file with Excel on Windows, choosing ";" as the separator.
6. We can see that the calculator is opened.
See the PoC: https://1drv.ms/v/s!AksJ421iyCG-mTLhbaTcZ8yrfDaq?e=5zhBH5
See https://owasp.org/www-community/attacks/CSV_Injection for how to fix it.
# Impact
Hijacking the user's computer. Exfiltrating contents from the spreadsheet, or other open spreadsheets.
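The mitigation described on that OWASP page is to neutralise cells that begin with a formula-trigger character before they are written to the CSV. A sketch of that idea in Python (not the actual LimeSurvey fix):

```python
import csv
import io

FORMULA_TRIGGERS = ("=", "+", "-", "@", "\t", "\r")

def sanitize_cell(value):
    """Prefix potentially executable cells with a single quote so spreadsheets treat them as text."""
    text = str(value)
    if text.startswith(FORMULA_TRIGGERS):
        return "'" + text
    return text

def write_safe_csv(rows):
    buffer = io.StringIO()
    writer = csv.writer(buffer, delimiter=";")
    for row in rows:
        writer.writerow([sanitize_cell(cell) for cell in row])
    return buffer.getvalue()

# The payload from the report is written out as inert text.
print(write_safe_csv([["username"], ["=1+cmd|'/C calc'!A0"]]))
```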
The researcher has received a minor penalty to their credibility for miscalculating the severity: -1
Carsten Schmitz validated this vulnerability 2 months ago
lujiefsi has been awarded the disclosure bounty
The fix bounty is now up for grabs
The researcher's credibility has increased: +7
Carsten Schmitz marked this as fixed in 5.6.11 with commit 953122 2 months ago
The fix bounty has been dropped
This vulnerability will not receive a CVE
This vulnerability is scheduled to go public on Mar 27th 2023
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648850.88/warc/CC-MAIN-20230602172755-20230602202755-00796.warc.gz
|
CC-MAIN-2023-23
| 1,187
| 20
|
https://forums.powershell.org/t/get-winevent-reading-security-logs/6717
|
code
|
I am in the process of creating a remoting endpoint to grant permission to read the Security event logs. I’ve created a group and placed it in the Event Log Readers group. This lets me read the Security events by opening eventvwr.msc. However, I am still getting ‘unauthorized operation’ errors when trying to read the events via:
Get-WinEvent -logname Security
Get-EventLog -logname Security
Oddly enough, I can get the events with a WMI query:
Get-WmiObject -Query "Select * from win32_ntlogevent where (logfile='security')"
Are there additional permissions I need to configure in order to read events via Get-WinEvent?
You’re getting an error when running these commands inside the endpoint you’ve created? And are the commands attempting to read from the local computer where the endpoint exists, or from a remote computer?
Yes, I am getting the error when running the commands both inside and outside the endpoint.
The commands are attempting to read from the local computer where the endpoint exists, but I am getting the error while connected to the default PS endpoint as well (microsoft.powershell).
It’s entirely possible that the ports needed to communicate with those services aren’t open, although I’d think that’d “break” the GUI as well, so that might not be it.
My other guess would be authentication. When you’re using the GUI, you’re making one “hop” to the service, from the computer the GUI is running on to the machine you’re querying. But with remoting, your first “hop” is a delegated authentication into the endpoint. From there, if those commands are actually making a network request, you’re on a second “hop” which by default would not carry a security token. It’d be an anonymous request, in other words, which would likely fail. Get-WmiObject works a little differently when you query against localhost - it deliberately makes a local repository connection, so it’d make sense that it’s working.
This is a little tricky to fix. Especially with older protocols like RPC/DCOM (Get-EventLog), turning on CredSSP might not help. But, it’s worth reading about the double hop problem (“Secrets of PowerShell Remoting”, our Ebooks menu) so you understand the potential problem.
A potential fix is to assign a RunAs Credential to your custom endpoint. THAT credential would need permission to read the event logs, but the people remoting into the endpoint would not. You could then lock down who was allowed to enter the endpoint by using a group, and setting the ACL on the endpoint appropriately. This would eliminate the double hop, since the hop “from” the endpoint “to” the log service would carry a hardcoded token.
It’s also possible it’s a protocol problem. RPC/DCOM might be blocked.
I’m not sure if it is a double hop problem since I am also getting the error while running the cmdlets locally against my machine. Furthermore, I can successfully run the cmdlets with an elevated account within the custom endpoint AND outside of it.
Since WinRM runs under the Network Service account, I've also tried adding it via the wevtutil command, with no luck:
wevtutil sl security /ca:O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)(A;;0x1;;;S-1-5-20)
This is purely a permissions problem then. If you can’t run it locally, you need to fix that first. Unfortunately that’s really outside powershell - without access to your environment I’m not going to be of any help troubleshooting it :(. Sorry.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00182.warc.gz
|
CC-MAIN-2022-33
| 3,509
| 18
|
https://www.techdirt.com/user/comments.php?u=terryhancock
|
code
|
"I've seen some discussion of Lib-Ray on several different forums which appears to betray some basic misunderstandings about what it is and what it isn't. I'm going to start with what it isn't, because most of the problems seem to arise from mistaking it for these things..."
Normally, I would agree with the advice to sit pat and let the lawyers do the talking. But this case is about public opinion and the nature of the law more than anything else.
It's political as much as it is legal.
It's unfortunate that Dotcom is neither attractive nor charismatic in person -- because that matters in politics.
I imagine that's not coincidence: with so many file locker services and file sharing services to choose from, the prosecutors could easily have gone shopping for just the right image they wanted on their "Wanted" poster -- they are engaged in politics and manipulation, not seeking justice.
In addition to the other arguments made against this, I would like to add this:
Using the term "property" to refer to intellectual monopolies causes a severe distortion of the debate for anyone who values private property, because "pro-intellectual property" positions are "anti-private property" positions. In other words, promoting intellectual property is only possible by violating the freedom of individual to be secure in their own private property.
Since this leads to a complete hash in logical terms, the use of the concept "intellectual property" is at best extremely confusing and politically motivated, even if you believe (as I do) that reasonable compromises can exist.
But in an honest debate we must acknowledge that state-granted monopolies on the use of information products must always entail an abrogation both of freedom of speech and of individual private property rights. Since these are both very serious things to violate, any such system has to be pretty limited to be reasonable. And that kind of reason is nowhere to be found amongst the proponents of "strong intellectual property" law.
I recently spent some time exploring whether AFTRA/SAG actors could work on free-culture projects without violating union rules (for my project, which needs voice actors for animation, the relevant union is apparently AFTRA). It appears that they can.
I think it would be really cool to see that happen, and I want to pursue it. So, it's really encouraging to see the artists resisting the legacy entertainment industry's heavy-handed tactics. It gives me hope that I'll be listened to when I start proposing this.
Still... it's just 189 signatures so far, and I'm not even sure how we know these are union members signing it (aside from the comments).
I found the graph really hard to interpret w.r.t. the subject.
It emphasizes relatively unimportant facts -- a histogram showing the distribution of "licit" media consumption. The secondary element is a histogram of "illicit" media consumption. Neither is what the article is about.
The thing you _wanted_ to communicate was that the percentage of illicit to licit use is increasing as we go to the right side of the chart.
To figure _this_ out, I have to mentally divide graphical bars of varying lengths (which may even be subject to optical illusion). As a result, the chart really doesn't help your point much. Just tabulating the numbers would probably be more effective.
But what would be a much better representation is to use a simple bar chart of the _percent illicit/licit use_ against the existing independent axis. This would normalize out the distribution information (which isn't very important), and make the relevant point (bars get bigger to the right) leap out at the reader.
I've never liked Netflix's streaming options. DRM. No Linux support. (And therefore) wasteful of bandwidth. Can't spool a whole movie, so I have to watch it in 5m snippets separated by 15m download. Plus it trashed the memory module on our Wii (only system in the house that would support it - I'm guessing it spooled to flash memory (!)). "Not interested, thank you."
Their DVD rental service is very good and fills a niche for rural customers who want access to a deep DVD catalog.
It has been good for that, and I'm not looking for the service to change. I'm delighted to (finally) be able to opt-out of their streaming business model and no longer pay for services I don't use.
Actually cell phone tech is a great example of why the strong IP model stunts innovation -- by comparison with the internet at large (dominated by the weak-IP models favored in internet standards development), cell phone networks are slow-growing, inefficient, and extremely limiting.
In fact, it's probably not unfair to say that most of the innovation that does exist in the cell phone space is really just parasitic use of the innovation from the internet at large -- almost every new cell phone feature you encounter is just a copy of something already in use on the web.
You might have a better case for the hardware, but even there the situation is ambiguous at best. The clear winners in hardware have been standardized commodity systems and even cell phones rely heavily on shared production of basic components and reliance on open standards for inter-component buses.
My conclusion since then is that the only way Sita is going to get converted to an open format is if somebody with Adobe Flash's animator installed renders the files to SWF format and publishes those.
If/when that happens, I think I (as well as many other people) can probably get it converted to SVG (and after that, conversion to many different open formats becomes possible).
But I am NOT willing to install a proprietary operating system or to buy Adobe's animation software just so I can make this initial conversion from FLA to SWF.
Nor am I willing to reverse-engineer FLA format and write a conversion library. There doesn't seem to be much interest in doing that. In fact, AFAICT, the only reason SWF is supported is because you needed that to _play_ Flash animations (SWF was intended as an opaque distribution format -- like a binary, and because Flash is a product of proprietary culture for proprietary animators, FLA, which was intended to be a source format, is generally not distributed, so there was little demand for reading it).
To clarify -- we are talking about access to the original vector graphics, for the purposes of making more sophisticated derivatives. The _video_ is of course already available in several free/open standard formats on Internet Archive, if you are satisfied with video snippets or frame-captures from the film.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123549.87/warc/CC-MAIN-20170423031203-00415-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 6,575
| 28
|
https://cafetran.freshdesk.com/support/discussions/topics/6000046061/page/2
|
code
|
Just as a question (Hans asked it a time ago on Proz.com).
As far as I understand, there is no Java and so no CafeTran on a standard Chromebook, but you might hack it and install a Ubuntu derivative (Chrubuntu). Most Chromebooks only have 2 or 4 GB RAM, not more.
Is this a viable, stable option, considering that 4 GB RAM is a kind of minimum for medium/bigger projects?
This is an old topic, but I would like to share my experience since it might be useful to someone else one day.
I just bought a Lenovo Chromebook S330, with an ARM CPU, 4 GB of RAM and 64 GB of disk space.
I first tried Crostini. It was OK for simple tasks, but not enough for my regular working environment.
I then tried Crouton to get a full Linux distribution in ChromeOs development mode. It worked well (perfectly with Ubuntu, not so well with Debian), but some apps that I need on a daily basis are not ARM compatible, so I decided to try a third option: CRD (Chrome Remote Desktop).
I installed it on my main computer (Linux Mint Debian Edition), then on the Chromebook.
This third option was the best one. I now have full access to my main computer from the Chromebook. The screen resolution and responsiveness are excellent (no latency problems so far).
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00034.warc.gz
|
CC-MAIN-2023-50
| 1,228
| 9
|
https://support.higherlogic.com/hc/en-us/articles/360039835112-Check-Product-Status
|
code
|
If your site is experiencing issues and you have verified that the cause is not in your environment, visit the Higher Logic Product Status page.
The page clearly displays the current status of our products.
- The status to the right of the product name reflects the status of the most degraded component or service of the product.
IMPORTANT: If there is an issue, rest assured that we are investigating it, working to correct it, and will provide updates and information on this page as quickly as possible.
TIP: You can "subscribe" to this page in order to be automatically notified when updates (e.g., new incidents, resolved incidents) are posted to this page. Refer to Subscribe to updates, below, to learn how.
Subscribe to updates
On the main status page, you can subscribe to get real-time status updates by one of several methods.
NOTE: Higher Logic encourages all customers to subscribe to this page in order to receive updates as they're posted. This keeps you informed and mitigates case-creation, thereby allowing us to focus on remediating the issue.
- In the upper right of the page, click SUBSCRIBE TO UPDATES, and then click the:
  - Envelope icon: specify your email address and click SUBSCRIBE VIA EMAIL.
  - Telephone icon: specify your mobile phone number and click SUBSCRIBE VIA TEXT MESSAGE.
    - This option triggers text-message notifications, not phone calls.
  - Slack icon: click SUBSCRIBE VIA SLACK.
  - Conversation bubble icon: visit our support site.
  - Atom Feed or RSS Feed: open a new tab that contains the feed code (see the polling sketch at the end of this section).
  - X: close the dialog.
After you select a subscription method, you can choose which Higher Logic products to receive updates for.
- Verify your subscriber information.
- Choose one or more options in the list and click Save.
A confirmation message is sent to the contact point (your email address, your phone number, etc.) that you specified in the SUBSCRIBE dropdown.
- Confirm your subscription.
NOTE: You must confirm your subscription in order to receive status updates.
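If you prefer to poll the Atom/RSS feed mentioned above rather than subscribe by email or text message, a short script can do it. The feed URL below is a placeholder (copy the real address from the status page), and feedparser is a third-party package:

```python
import feedparser  # third-party: pip install feedparser

# Placeholder address; use the feed URL shown on the actual status page.
FEED_URL = "https://status.example.com/history.atom"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:
    print(entry.get("updated", "n/a"), "-", entry.get("title", "untitled"))
```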
Manage your subscriptions
After subscribing to status updates, you can return to the Product Status page to change your subscription settings.
- Click SUBSCRIBE TO UPDATES and then click the notification method you're currently subscribed to in order to access the subscription-management page.
- Manage - Check the boxes to add/remove product notifications; click Save.
- Remove - Check the boxes to select product notifications; click Unsubscribe from updates. Click Unsubscribe from updates at the confirmation prompt.
A message is sent to your specified contact point to confirm the unsubscribe.
The Scheduled Maintenance section provides the latest information on our quarterly-maintenance schedule for Community, Marketing Enterprise, and Marketing Professional.
See Maintenance and Release Schedules for our quarterly-maintenance schedule for the current calendar year, as well as weekly-release information.
The Past Incidents section provides a chronological, daily "status report."
NOTE: Updates to any current issues are posted in this section.
- Below the list, click Incident History to access the Incident History page.
The page displays a three-month history, starting with the current month.
- Use the month toggle in the upper right to navigate in three-month increments.
- Click Filter Components to refine the list by product, as shown.
- Below the list, click Current Status to return to the main status page.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474669.36/warc/CC-MAIN-20240226225941-20240227015941-00020.warc.gz
|
CC-MAIN-2024-10
| 3,453
| 37
|
https://contentsense.jivrus.com/pricing
|
code
|
Content Sense is a great add-on for Commercial, Educational as well as Non-Profit organisations.
Content Sense is a PAID service.
Free trial extractions are just to see how it works before buying.
Usage and Quota
Usage is measured by the number of extractions (from image files) performed by the user.
A transaction is a successful extraction of a file (image).
Quota is based on the plan that the user chooses. Choose the appropriate plan below as per your need.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511106.1/warc/CC-MAIN-20231003124522-20231003154522-00536.warc.gz
|
CC-MAIN-2023-40
| 450
| 7
|
https://overwolf.github.io/api/changelogs/overwolf-platform/2021/march/166
|
code
|
Overwolf will now restore installed apps in cases where the local database got corrupted.
Updated OBS to version 26.1.1 - now we have the latest and best recording capabilities.
Improved the OBS crash reports.
Overwolf Appstore as a default extension - the Overwolf Appstore has all grown up; it is now a default extension within Overwolf. Most of you won't even notice a change, and that's good, but under the hood, there are several big differences.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100545.7/warc/CC-MAIN-20231205041842-20231205071842-00257.warc.gz
|
CC-MAIN-2023-50
| 458
| 5
|
https://nation.marketo.com/thread/48712-issue-with-the-not-had-interesting-moment-filter
|
code
|
I am trying to limit the number of times an IM can fire to twice per day. Currently, this is how I have the "Not Had Interesting Moment" filter set up:
However, I'm still seeing more than two of these fire for a single person in a day. Is there any issue with this logic? As far as using the min number of times and date of activity constraints?
UPDATE: Inactivity filters cannot be used with the "Min. Number of Times" constraint. Marketo doesn't understand how to have NOT done something a minimum number of times. You can either set up a custom score model or reference a separate smart list to achieve the goal here.
I'm confused as to why this is an option in the dropdown on this filter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207146.96/warc/CC-MAIN-20190327000624-20190327022624-00435.warc.gz
|
CC-MAIN-2019-13
| 671
| 4
|
http://www.cgl.ucsf.edu/chimera/1.4/docs/UsersGuide/print.html
|
code
|
The contents of the Chimera graphics window or a raytraced image from POV-Ray can be saved to a file with File... Save Image. See also: copy, making movies, exporting a scene, tips on preparing images
When Maintain current aspect ratio is on, the image width/height ratio is constrained to match the width/height ratio of the graphics window (not including borders). Entering the width automatically adjusts the height and vice versa.
When Maintain current aspect ratio is off, both width and height can be set as desired. Grow to Fit resizes the graphics window to match the image aspect ratio by increasing one window dimension (i.e., width or height), while Shrink to Fit resizes the graphics window to match the image aspect ratio by decreasing one window dimension. If the window has not been grown or shrunk to match the image aspect ratio, a saved image will include more than what is displayed.
When dimensions are specified in units of length (anything other than pixels), they are converted to inches internally and multiplied by the Print resolution (dpi) to give the output pixel dimensions. The print resolution setting is saved in the preferences file.
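As a quick illustration of that conversion (plain arithmetic, not Chimera code): a size given in inches is multiplied by the print resolution to get the output pixel dimensions.

```python
def output_pixels(width_inches, height_inches, dpi):
    """Convert a physical image size in inches to output pixel dimensions at a given dpi."""
    return round(width_inches * dpi), round(height_inches * dpi)

# A 3 x 2 inch figure saved at 300 dpi comes out as 900 x 600 pixels.
print(output_pixels(3, 2, 300))
```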
When raytracing with POV-Ray, images can only be saved in the PNG format. First, POV-Ray input files containing the scene (*.pov) and raytracing options (*.ini) are generated, and then the raytracing calculation is run as a background job that can be monitored or canceled using the Task Panel.
Otherwise, the image will be rendered offscreen as permitted by the system. (Offscreen rendering is not supported by X11 in Mac OS or by certain older machines with other operating systems. On those systems, the image will be redrawn in the graphics window, piece by piece depending on the specified image size and degree of supersampling. The graphics window should not be obscured by other windows or moved offscreen, even partially, while being used to draw the image.)
After the image has been rendered, the file name and type (format) can be specified, or the save can be canceled. Images can be saved in the PNG, TIFF (LZW-compressed), TIFF-fast (uncompressed), PPM, JPEG, PS (PostScript), and EPS (Encapsulated PostScript) formats. The quality of JPEG files can be specified as an integer in the range 5-95, with higher values corresponding to higher quality and larger files.
Additional format options for stereo pairs are stereo JPEG (*.jps) and stereo PNG (*.pns). Viewing such files as standard JPEG and PNG shows side-by-side images, but special viewers are available to show them as stereo. Free viewers include StereoPhoto Maker, JPSViewer, and NVidia's consumer 3D stereo driver (requires the appropriate graphics hardware).
The tutorials include step-by-step examples of preparing images in Chimera.
Many display styles and colors are available.
Presets are predefined combinations of display settings. A preset can be applied by choosing it from the Presets menu or by using the preset command. Further changes can be made after a preset has been applied. Publication presets make the background white, adjust display styles, and increase smoothness.
Background color can be changed using the menu (Actions... Color), the set command, (for example, set bg_color white), or the Background preferences. If system hardware permits, background transparency can be enabled with the Effects tool. Images saved with a transparent background are easier to composite with different backgrounds in image-editing applications.
Depth cueing is progressive shading from front to back, also known as fog. Depth cueing can be adjusted by moving the global clipping planes or by changing associated parameters including the shading color with the Effects tool or the set command. The depth-cueing color can also be changed using the menu (Actions... Color).
Clipping planes cut away portions of structures, surfaces and objects. The global clipping planes shown in the Side View affect all models and can only be perpendicular to the line of sight. In addition, each model can have a per-model clipping plane oriented at any angle. Surface Capping controls whether clipped surfaces appear solid or hollow.
Silhouette edges are outlines that emphasize borders and discontinuities. Although shown in the interactive display, these are mainly intended for output images (supersampling makes them look much smoother in the image than on the screen). Silhouette edges and their thickness and color can be controlled with the Effects tool or the set command.
Smoothness can be increased by increasing the pixel dimensions of an image (its resolution). Additionally, independent of resolution:
Transparency. Surfaces and other items can be transparent.
Shadows. Shadowed images can be produced by raytracing with POV-Ray. Not all of the image tips apply to raytracing: some aspects of a Chimera scene are not handled (see raytracing limitations), and some raytracing parameters are controlled independently of the Chimera scene in the POV-Ray Options preferences. The supersmooth style of ribbon is recommended for use with raytracing. Shadowed images can also be generated with conic or neon (the latter is not available on Windows).
Labels. Labels suitable for publication images can be added with the 2D Labels tool (or command 2dlabels). 2D labels containing arbitrary text and/or symbols can be placed or dragged anywhere in the plane of the graphics window, and each can be colored and sized independently. Standard Chimera labels (those shown with the Actions... Label menu or the commands label and rlabel) can be repositioned with the mouse, but their size can only be adjusted collectively in the Background preferences.
Color Keys. Color keys suitable for publication images can be created with the Color Key tool. A color key shows how colors relate to quantities. Such coloring schemes are applied by various tools, including Render by Attribute.
Stereo. Wall-eye, cross-eye, and red-cyan stereo images can be saved by changing the graphics window to the corresponding camera mode with the Camera tool (or the command stereo) and using the same as screen Image camera mode in the Save Image dialog. Another way to save cross-eye stereo images is with the stereo pair Image camera mode; in that case, it does not matter what camera mode is being used in the graphics window, but the resulting image will be twice as wide as the specified size.
Color space. Some publications require images to be in the CMYK color space. Chimera currently saves images in only the RGB color space, so a separate application such as Adobe Photoshop® must be used to switch between the two.
Several factors should be considered in color scheme design, including
what the colors are meant to indicate, how the data are presented,
and whether viewers may have color vision deficiencies.
Useful Web sites include:
(Note: hex color specifications at this site can be converted to Tk codes by replacing the leading 0x with #; for example, hex 0xe78ac3 corresponds to Tk code #e78ac3. In Chimera, Tk codes can be used in coloring commands and entered into the Color Editor.)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00327.warc.gz
|
CC-MAIN-2023-14
| 7,087
| 27
|
https://www.doubleclick.com.eg/en-us/General-Ledger
|
code
|
Opening balances of accounts and cost centers can be added when starting work on the system, and they appear when extracting reports.
Estimated monthly values can be added for each account, to compare against actual values when extracting budget reports.
Estimated monthly values can be added for each cost center, to compare against actual values when extracting budget reports.
Complete flexibility when reviewing entries, provided by a register of unposted journal entries.
Manual distribution of the value of one side of an accounting entry across more than one cost center, by percentages or by amounts.
Search for entries in many and varied ways.
Dealing with more than one accounting period at the same time, without being restricted to closing the previous period.
Automatic collection of all journal entries generated by documents in the other sub-modules.
An optional book number (which cannot be repeated) can be added alongside the automatic serial number, for flexibility when reviewing and matching the book documents against the program.
An automatic description appears in entries generated by the sub-modules, indicating the type and number of the document that produced the entry.
All foreign currencies used in foreign transactions can be added, with the exchange rate either fixed or changed per document according to the daily exchange rate.
Currency revaluation differences can be calculated, and the revaluation date and the currency to be revalued can be specified.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474795.48/warc/CC-MAIN-20240229071243-20240229101243-00065.warc.gz
|
CC-MAIN-2024-10
| 1,484
| 12
|
http://snarkmarket.com/2009/3250
|
code
|
I just subscribed to Jamais Cascio’s future-y blog on Fast Company and in the subhed of the Google Reader subscribe page, it said:
That’s just a bit of exposed CMS-speak, but hmm. It seems resonant somehow.
Hari Seldon, speaking to students across a glowing touch-table covered with flickering blue-green graphs: “But to predict future events, you must apply the taxonomy view.” He swipes his thumb and the graphs rotate.
A student pipes up. Linus, the eager one: “But at what depth, Master Seldon? Three-hundred? Three-thousand?”
“No, no, no.” Seldon smiles. “The depth… is zero.”
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998250.13/warc/CC-MAIN-20190616142725-20190616164559-00064.warc.gz
|
CC-MAIN-2019-26
| 603
| 5
|
http://www.typologycentral.com/forums/the-sj-guardhouse-esfj-isfj-estj-istj-/59811-am-hates-evasive-answers.html
|
code
|
I'm posting this to my fellow SJs as I'm curious if this may be an Si thing or if it's just me, but...
Am I the only one who really hates it when someone can't give you a straight answer? For example, when I ask a friend of mine, "How'd your night go?" the conversation goes like this:
Them: "was good! We stayed out late, I only got a few hours of sleep."
Me: "why, what happened?"
Them: "what do you mean?"
Me: "why were you out so late?"
Them: "we were out late because we were out late. Are you mad at me or something?"
Now see, at this point, I don't really care a whole lot about why they were out so late; I'm just curious at that point as to why it is they're giving me such nondescript answers. It's like an alarm bell goes off in my head and I start to wonder, "What's up with this? Why are they answering me in this fashion?" And I become intensely curious.
Apparently this offends some people. Am I the only one who gets like this? Is this an Si or perhaps an inferior Ne thing?
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718034.77/warc/CC-MAIN-20161020183838-00054-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 989
| 9
|
http://popular-traditional-argentina-food.com/empanadas1.html
|
code
|
Fried Empanadas (Step 2 of 3)
1) Making the Dough - 2) Making the Filling - 3) Finishing the Empanada
Make the Filling:
1 pound of lean ground beef
2 large onions, diced
2 scallion onions, diced
1 tablespoon paprika
3 hard boiled eggs
green olives (if desired)
Add salt to taste (Argentina food tends to use more salt)
1/2 cup of boiling water or broth
- Cook the onions until they turn brown.
- Add the ground beef to the onions, stir until it changes color.
- Add the diced scallions, paprika, and boiling broth or water.
- Stir until the food forms a thick mixture.
- Remove from heat and let cool.
- Add chopped green olives, and thinly diced hard boiled eggs.
Now stuff the empanadas
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887692.13/warc/CC-MAIN-20180119010338-20180119030338-00261.warc.gz
|
CC-MAIN-2018-05
| 685
| 18
|
https://www.hullegalaxytabs.com/step-by-step-to-buy-product-key-for-windows-10-os.htm
|
code
|
Windows 10 can be installed just by downloading the operating system. Here are a few steps that you need to follow to buy a Windows 10 product key.
- Go to the official Microsoft site to download and install the OS.
- You can choose to add the product key to your cart if you prefer to buy it later.
- If you choose to buy the key, download it from the link.
- In the process of downloading, you have to make the payment and enter your details to take advantage of the offers.
Can the product key be obtained from a random site?
Apart from Microsoft, the product key can be obtained from a few other sources. Some of these sources may provide only the key for activation; in rare cases they provide the software along with the activation code. Microsoft authorizes particular sites, which will lead you to the right places to buy a product key.
What will be the price?
The price of a Windows 10 Pro product key changes from day to day, and the price differs according to the features and the lifetime of the license. Whenever you install the operating system software, you need to enter the product key to activate it.
Is it necessary?
Yes; to access the full set of features within the software, a product key is essential. We also need to consider full access along with the security options. It is important to know that a product key can be used with only one system; using it on multiple systems will lead to unauthorized access, and the system will be dropped into product-activation failure.
How to buy at lowest price?
As we discussed before, there are many sites authorized by Microsoft to provide product keys. These sites will offer you a key at an affordable rate. You need to compare the sites before finalizing the one to buy from. It is recommended to buy from the top sites to avoid scams.
Alternative to buying a key
There are lots of alternatives for activating a Windows product key. Those should be kept in mind when discussing the activation process and procedure. There is a lot of positive feedback that shapes what people expect from product key activation.
The operating system is what gets you into the system, acting as an interface between the hardware and the user. The operating system should be maintained well to keep data secure and safe. People can also choose an open-source OS, but it is not as user friendly as this Microsoft OS. Since Microsoft keeps updating its versions, look for the Windows 10 OS. It will help you weigh the facts and choose a system that works well by giving access to all the features.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474948.91/warc/CC-MAIN-20240301030138-20240301060138-00837.warc.gz
|
CC-MAIN-2024-10
| 2,548
| 16
|
http://www.astroexplorer.org/details/apjlac9389f1
|
code
|
Multiband imaging of IIZw096. (a) HST/Advanced Camera for Surveys (ACS) F435W (0.4 μm) image showing the entire IIZw096 system with Spitzer/MIPS 24 μm contours in orange. The FOV of the JWST/MIRI SUB128 subarray is centered on the dust-obscured region (gray dotted box). The white box indicates the region presented in the rest of the panels. (b) A zoomed-in image of panel (a) showing the obscured region. The red plus symbols are a subset of the sources identified by Wu et al. (2022). (c) HST/NICMOS F160W (1.6 μm) image of the obscured region. The region and source names from Goldader et al. (1997) and Wu et al. (2022), respectively, are shown. (d)–(f) JWST/MIRI SUB128 images taken with the F560W (5.6 μm), F770W (7.7 μm), and F1500W (15 μm) filters. The MIRI images are shown with a logarithmic scale. The red circles indicate the locations of the detected sources with a size corresponding to the beam FWHM. Gray circles indicate sources detected in F560W and F770W but not confidently detected at F1500W. The red plus symbols are the sources detected with HST as shown in panels (b) and (c). PSF features are visible extending outwards from ID 8 in panel (f) because it is compact and bright. All images are shown with north up and east to the left. These images show that while the complexity of IIZw096 was evident from the near-IR HST data, the true nature of the dust emission and the source of the power is only finally revealed with JWST.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00281.warc.gz
|
CC-MAIN-2022-49
| 1,462
| 1
|
https://www.mineplex.com/threads/why-is-there-so-little-winner-experience-now.215233/#post-745675
|
code
|
I remember that when I played Skywars a few months ago, the winner could get 1500-2100 experience points, but when I won a Skywars game today, I only got 700-800 experience points. This makes leveling up very slow, and I don't want that!!! I want the winner experience ratio changed back.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00536.warc.gz
|
CC-MAIN-2021-43
| 298
| 1
|
https://www.libhunt.com/l/c-sharp/topic/crawler
|
code
|
Top 6 C# Crawler Projects
Cross Platform C# web crawler framework built for speed and flexibility. Please star this project! +1. Project mention: Can you build a web crawler in c#? | reddit.com/r/learnprogramming | 2022-04-11
This can be done perfectly well in c#, https://github.com/sjdirect/abot for example.
A Tumblr, Twitter and newTumbl Blog Backup Application. Project mention: Twitter archive not working? | reddit.com/r/Twitter | 2022-11-18
There's a github project called TumblThree that can pull tweets and media that have been publicly posted to any account you give it. It can't do private accounts or DMs and it doesn't pull quote tweets, but it gives you a backup of everything publicly available.
A simple but powerful web crawler library for .NET
🌌 High productivity semi-automatic crawler generator 🛠️🧰. Project mention: I made a powerful crawler creation tool on c#! | reddit.com/r/csharp | 2022-11-10
A multi-threaded web crawler library that is generic enough to allow different engines to be swapped in. Project mention: I need data from a website. It is viable to create an API that scrapes the website and returns the data on an endpoint? | reddit.com/r/dotnet | 2022-12-20
Didn't get a chance to reply earlier but depending on what you're trying to do, you might want a web crawler. I have a crawler on Github that I built for scraping in instances where someone doesn't have an API. If you go this route, I suggest doing it as a background task and go off cached data.
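None of the code below is tied to the C# libraries listed here; it is just a minimal sketch, in Python for brevity, of the crawl-and-cache pattern described in that comment. The start URL, cache path, page limit, and delay are placeholder values.

    import json, time, urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkParser(HTMLParser):
        """Collects href attributes from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links += [v for k, v in attrs if k == "href" and v]

    def crawl(start_url, max_pages=20, delay=1.0, cache_path="cache.json"):
        """Breadth-first crawl that stores page HTML in a local cache file."""
        seen, queue, cache = set(), [start_url], {}
        while queue and len(cache) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
            except Exception:
                continue
            cache[url] = html                       # later requests can be served from this cache
            parser = LinkParser()
            parser.feed(html)
            queue += [urljoin(url, link) for link in parser.links]
            time.sleep(delay)                       # be polite to the target site
        with open(cache_path, "w") as f:
            json.dump(cache, f)

    if __name__ == "__main__":
        crawl("https://example.com")

Running this as a scheduled background task and reading from the cache file, rather than hitting the site on every request, matches the advice in the comment above.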
A simplistic web scraper which aims to make scraping webpages and crawling links faster by utilizing parallel threading. (by 0x78f1935) Project mention: CLI Scraper | reddit.com/r/csharp | 2022-11-27
C# Crawler related posts
I made a powerful crawler creation tool on c#!
2 projects | reddit.com/r/csharp | 10 Nov 2022
Download videos in stream
2 projects | reddit.com/r/freesoftware | 12 Aug 2022
Recursion needed in small crawler
1 project | reddit.com/r/csharp | 15 May 2021
What are some of the best open-source Crawler projects in C#? This list will help you:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00567.warc.gz
|
CC-MAIN-2023-14
| 2,324
| 19
|
https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-016-3434-3
|
code
|
Modular combinatorial binding among human trans-acting factors reveals direct and indirect factor binding
BMC Genomics volume 18, Article number: 45 (2017)
The combinatorial binding of trans-acting factors (TFs) to the DNA is critical to the spatial and temporal specificity of gene regulation. In certain regulatory regions, more than one regulatory module (a set of TFs that bind together) is combined to achieve context-specific gene regulation. However, previous approaches are limited to either pairwise TF co-association analysis or the assumption that only one module is used in each regulatory region.
We present a new computational approach that models the modular organization of TF combinatorial binding. Our method learns compact and coherent regulatory modules from in vivo binding data using a topic model. We found that the binding of 115 TFs in K562 cells can be organized into 49 interpretable modules. Furthermore, we found that tens of thousands of regulatory regions use multiple modules, a structure that cannot be observed with previous hard clustering based methods. The modules discovered recapitulate many published protein-protein physical interactions, have consistent functional annotations of chromatin states, and uncover context specific co-binding such as gene proximal binding of NFY + FOS + SP and distal binding of NFY + FOS + USF. For certain TFs, the co-binding partners of direct binding (motif present) differs from those of indirect binding (motif absent); the distinct set of co-binding partners can predict whether the TF binds directly or indirectly with up to 95% accuracy. Joint analysis across two cell types reveals both cell-type-specific and shared regulatory modules.
Our results provide comprehensive cell-type-specific combinatorial binding maps and suggest a modular organization of combinatorial binding.
The combinatorial binding of trans-acting factors (TFs) is an important basis for the spatial and temporal specificity of gene regulation [1–4]. Combinations of TFs have been shown to regulate gene expression stripes in the Drosophila embryo , to generate cell-type-specific signaling responses [6, 7], and to program cell fates . In this paper we call each distinct set of TFs that bind together to the same regulatory regions a regulatory module.
Understanding the interplay among regulatory modules is essential to dissect the complexity of gene regulation. Previous studies have found that TFs tend to bind in clusters, which are typically characterized by a large number of TF binding sites in a regulatory region [9–12]. These co-binding TFs may belong to different functional modules that can be combined in regulatory regions to achieve specific functions. For example, gene regulation is initiated by the interaction of enhancer-bound TFs, promoter-bound TFs, and TFs that bring the enhancers and promoters together in three dimensions, such as mediator, CTCF, and cohesin [13–15]. Therefore, CTCF/cohesin modules may co-occur with enhancer-related modules, promoter-related modules, or both. Such module co-occurrences suggest that combinatorial binding of TFs may be organized in a modular hierarchy: a regulatory region may use a combination of multiple regulatory modules, which are, in turn, combinations of multiple TFs.
However, previous methods for studying combinatorial binding do not consider the modular organization of TF co-binding. Early work in discovering TF co-binding was limited to lower organisms or to the computational prediction of motif sites [16–19]. The systematic discovery of regulatory modules in humans has recently become possible with large-scale efforts such as the ENCODE project to comprehensively profile the in vivo binding of tens to hundreds of TFs in multiple human cell types [20, 21]. Initial analyses of the ENCODE data were limited to either pairwise TF co-binding or TF co-binding in the genomic regions bound by a particular TF and thus did not allow comprehensive discovery of higher order combinatorial binding. Hard-clustering-based methods have been applied to genome-wide binding data. For example, self-organizing maps (SOMs) have been used to explore and visualize the colocalization patterns of TFs and k-means clustering has been used to characterize the combinatorial regulation of erythroid enhancers . These methods model TF binding at a given region with a single module and consequently require a large number of modules to fully represent the complexity of TF combinatorial binding. For example, Xie et al. applied SOM to a dataset from K562 cells and estimated the optimal number of neurons (or modules) for the resulting SOM to be 2,852 . Non-negative matrix factorization (NMF), a soft clustering method, has also been applied to infer TF interactions . However, this work did not explicitly explore the issue of multiple module usage and predicted only a small number of TF combinations . It notably failed to capture the well-studied CTCF/cohesin interaction [14, 24]. Therefore, we have found that existing methods are not suitable for modeling the modular structure of multiple regulatory modules in the same regulatory regions.
The need to model the modular organization of regulatory modules motivated us to use a probabilistic topic model that can represent modular TF co-binding in regulatory regions. Topic models have been widely used to discover thematic structures in a large corpus of documents [25, 26]. A topic model decomposes documents into a set of all shared topics, where a topic is a set of words that co-occur in multiple documents. The factoring of a document into multiple topics permits the discovery of compact and coherent topics that can be combined to accurately represent a document. This factoring results in better performance in predicting held-out data than mixture models , which are hard-clustering methods that force a document to be described by a single topic. Topic modeling has been used to discover gene expression programs [27, 28] and microRNA regulatory modules , but it has not yet been applied to study TF combinatorial binding.
Regulatory Module Discovery (RMD) applies a topic model to systematically discover regulatory modules using a large compendium of in vivo TF binding data. We show that RMD discovers more compact and comprehensive modules than other methods. Applying RMD to data from human K562 cells, we discovered diverse sets of regulatory modules and found that tens of thousands of regulatory regions use multiple modules in a modular manner. We found that, for certain TFs, direct (motif present) and indirect (motif absent) binding of the TF associates with distinct sets of co-binding partners. Finally, our analysis discovered cell-type-specific modules and shared modules, and that a given regulatory region can utilize different modules in different cell types. Overall, our results provide comprehensive cell-type-specific global maps of regulatory modules and suggest a modular organization of TF combinatorial binding in regulatory regions.
Results and discussion
Regulatory module discovery (RMD) discovers more compact and comprehensive regulatory modules than other methods
RMD discovers regulatory modules given binding data for a large set of regulatory regions. RMD is based on Hierarchical Dirichlet Processes , a Bayesian non-parametric topic model that automatically determines the number of modules based on the complexity of the observed data. To use conventional document topic model terminology, regulatory regions are “documents,” TF binding sites are “words,” and regulatory modules are “topics.” As in the document model, a regulatory region may utilize one or more modules, and a TF may participate in multiple modules.
The co-binding of TFs in regulatory regions across the genome can be represented without loss by a region-TF matrix. In the work below, a full region-TF matrix would be of size ~140,000 regions by 115 TFs and is difficult to directly interpret. Using topic modeling, a large region-TF matrix is summarized into a compact module-TF matrix (49 modules × 115 TFs) and a TF site assignment table (Fig. 1a). The module-TF matrix describes the set of regulatory modules discovered, with each module represented as a probability distribution over all the TFs. The assignment table assigns each TF binding site from each region to one of the modules. The assignment table can be further summarized into a region-module matrix (~140,000 regions × 49 modules) that describes which modules each regulatory region uses.
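To make the document/word/topic analogy concrete, the following is a minimal sketch (not the authors' code) of decomposing a region-TF count matrix with a topic model in Python. The paper uses a non-parametric HDP; scikit-learn's LatentDirichletAllocation with a fixed module count stands in for it here, and the input matrix is random toy data rather than the K562 compendium.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy region-TF count matrix: rows are co-binding regions ("documents"),
    # columns are TFs ("words"), entries are binding-site counts.
    rng = np.random.default_rng(0)
    region_tf = rng.poisson(0.3, size=(1000, 115))

    # The paper uses a non-parametric HDP that infers the number of modules;
    # plain LDA with a fixed module count is used here purely for illustration.
    lda = LatentDirichletAllocation(n_components=49, random_state=0)
    region_module = lda.fit_transform(region_tf)   # regions x modules ("region-module matrix")
    module_tf = lda.components_                    # modules x TFs ("module-TF matrix")

    print(region_module.shape, module_tf.shape)    # (1000, 49) (49, 115)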
We first tested the ability of RMD, k-means clustering, and NMF to accurately capture TF-TF correlations that are present in the binding data. We used a compendium of ChIP-seq data from 115 TFs in human K562 cells , and then pooled and merged all the TF binding sites into ~140,000 non-overlapping co-binding regions, each of which was required to contain at least 3 TF binding sites. We applied the three methods to these ~140,000 co-binding regions and constrained them to discover the same number of modules (k = 49). Then for each method, we evaluated how well the pairwise TF correlation scores from the module-TF matrix correlate to those from the original region-TF matrix. We found that the topic model modules more accurately recapitulate the pairwise TF co-binding relationship in the original data (r = 0.81) than the modules learned from the other two methods (r = 0.74 for k-means clustering, r = 0.30 for NMF) (Fig. 1b). When the number of modules k is increased to 100, the performance of k-means clustering and NMF improves to r = 0.80 and r = 0.59, respectively (Additional file 1: Figure S1a). In addition, we observed that the k-means modules tend to be more similar to each other than those from RMD and those from the original binding data (Fig. 1b). Because hard-clustering-based methods do not factor complex binding regions into multiple modules, they generally generate more similar modules and need more modules to represent the structure in the data than a topic model. However, the increase in module count makes the interpretation of the modules harder. In the limit, increasing k to the total number of regions will exactly recapitulate the original binding data but will not permit common patterns of co-binding to be observed.
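A sketch of the evaluation just described, again on toy data: compare the pairwise TF-TF correlations implied by a learned module-TF matrix with those computed from the original region-TF matrix. The toy matrices here are random stand-ins for the real data.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(1)
    region_tf = rng.poisson(0.3, size=(1000, 115))                    # toy regions x TFs
    module_tf = LatentDirichletAllocation(
        n_components=49, random_state=0).fit(region_tf).components_  # modules x TFs

    def pairwise_tf_correlation(matrix):
        """Pearson correlation between all TF columns, upper triangle only."""
        corr = np.corrcoef(matrix, rowvar=False)
        return corr[np.triu_indices_from(corr, k=1)]

    # How well does the learned module-TF matrix preserve the TF-TF co-binding
    # structure of the original region-TF matrix?
    r = np.corrcoef(pairwise_tf_correlation(region_tf),
                    pairwise_tf_correlation(module_tf))[0, 1]
    print(f"agreement with original co-binding structure: r = {r:.2f}")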
Given that RMD needs fewer modules to represent the binding data than k-means clustering, we then asked whether the RMD modules were sufficient to represent the clusters produced by k-means (k = 49 and k = 100). For this analysis, we considered an RMD module to match a k-means cluster and vice-versa if the Pearson correlation between the module/cluster vectors is greater than 0.5. We found that all the k-means clusters (k = 49) are matched by RMD modules, while 10 RMD modules are not matched by any k-means cluster for k = 49 (Fig. 1c). These unmatched RMD modules are used in a small number of regions and often co-occur with other more widely used modules. For example, Module 4 (STAT1 + STAT2 + STAT5A) binds in 960 regulatory regions that are highly enriched with genes involved in immune response (FDR q-value = 2.2E-14) and Interferon alpha/beta signaling (FDR q-value = 2.2E-14), as shown by a GREAT analysis of annotation enrichment. Two thirds of these regions also use other modules such as enhancer or promoter modules. Other modules that are not identified by k-means include co-binding of ZBTB33 and promoter-associated factors (Module 1), and of Pol3 + BDP1 + ATF3 (Modules 17 and 41). Even when k is 100 for k-means, 6 RMD modules are not matched by any k-means cluster, while all the k-means clusters are matched by RMD modules (Additional file 1: Figure S1b). These comparison results are robust across different correlation cutoff values for matching the modules (Additional file 1: Figure S1c). The unmatched modules are missed by k-means clustering likely because they are overshadowed by the other more widely used modules in the same regions.
In summary, our analysis shows that RMD is better at decomposing complex binding regions into a combination of specific modules and learning a set of more accurate, compact, and comprehensive modules than hard clustering and matrix factorization approaches.
A global map of regulatory modules in human cells
Applying RMD to the K562 dataset, we discovered a global combinatorial binding map consisting of 49 regulatory modules (Fig. 2 and Additional file 2: Tables S1-S2). We also applied RMD to data from GM12878 cells (86 TFs) and discovered 49 modules (Additional file 1: Figure S2 and Additional file 2: Tables S3-S4).
We found that the discovered modules are easy to interpret and reveal coherent functional groups of co-binding TFs. The modules discovered from K562 cells capture known sets of factors that interact with each other or function as a complex, such as the following:
the master regulators GATA1, GATA2 and TAL1 ; and the enhancer-binding co-activator p300
Pol3 transcriptional machinery
AP-1 factors such as JUN/JUNB/JUND/FOS/FOSL
SPI1 (also known as PU.1) and ELF1
To further evaluate whether the discovered modules were consistent with known TF interactions, we compared TF co-occurrences in these modules with published in vivo protein-protein interactions assayed by antibody immunoprecipitation and mass spectrometry (IP-MS) in K562 cells . Because of the limited coverage of the IP-MS dataset, we were not able to evaluate the specificity of the TF combinations we discovered. Similar to the original study , we evaluated the sensitivity in recovering the IP-MS interactions using TF-TF associations discovered by RMD. Among the 115 TFs we studied, 33 physical protein-protein interactions were identified in the published IP-MS dataset, 22 of these TF interactions are captured as TF combinations in the K562 regulatory modules (p value < 0.05) (Additional file 3: Table S7). The fraction of overlap is similar to the original study, which discovered TF co-binding as 2,852 SOM neurons (i.e. modules) . Thus, RMD is able to recapitulate previous findings with a much smaller number of interpretable modules.
We then examined whether the discovered regulatory modules were consistent with the chromatin states of the regulatory regions that use them. We annotated regulatory regions by DNase hyper-sensitivity, histone modifications, and the genome segmentation annotations derived from them [20, 42]. For the regulatory regions that utilize the same modules, we computed the fraction of the regulatory regions that overlap with these annotations. We found that for the regions using the same modules, the chromatin states of the regions are consistent with the functions of the TFs participating in the modules (Fig. 3a). For example, modules with master regulators of K562 cells and co-activator/co-repressors GATA1 + GATA2 + TAL1 + p300 + RCOR1 + TEAD4 are used in regions that are annotated with enhancer chromatin state and are enriched with H3K4me1 histone modification, while the Pol2/promoter modules are used in the regions that are annotated with TSS (transcription start site) chromatin state and are enriched with promoter-associated histone modifications such as H3K4me3, H3K9ac, and H3K27ac. Most regulatory modules are used by regulatory regions that are DNase hypersensitive, which may be explained by the preferential binding of TFs in open chromatin. Consistent with a previous study , the modules used in the non-DNase hypersensitive regions are pre-dominantly repressive modules with heterochromatin-bound factors, such as Module 48 (the combination of ZNF274, ZNF143, TRIM28, CBX3, SETDB1, and other factors) and Module 7 (the combination of ZNF143, TRIM28, CBX3 and SETDB1, but not ZNF274).
To further characterize the potential functions of the regulatory modules we discovered, we performed systematic gene ontology (GO) analysis using GREAT on the genes proximal to the regions that use the same modules. We clustered the enriched GO terms based on the FDR q-values of the enrichment across the 49 modules. We found that specific GO terms are enriched in the regions using specific modules (Additional file 1: Figure S3). For example, housekeeping functions are enriched in promoter modules; regulatory functions and cell-type-specific terms such as “hematopoietic or lymphoid organ development” are enriched in both the enhancer and AP1 modules.
To facilitate interpretation, we further clustered modules that are driven by similar sets of TFs into 23 module groups. The modules in the same group share the same set of main TFs, yet differ in some specific minor TFs. For example, in the CTCF group, modules 29 and 35 both contain the TFs CTCF, RAD21, SMC3, and ZNF143; Module 29 includes ARID3A and CEBPB, while Module 35 includes MAX, ZBTB7A, MYC, and YY1.
RMD is able to discover widely used regulatory modules as well as very specific modules. For example, the promoter, enhancer, and CTCF modules are each used in more than 10,000 regulatory regions. At the same time, a small and specific module, ZNF274 + TRIM28 + SETDB1 (Module 48), which has been shown to specifically bind at the 3′ ends of zinc finger genes and suppress their expression , is used in only 63 regions.
To reveal the structure and importance of the discovered regulatory modules, we applied principal component analysis (PCA) to the module-TF matrix. We found that modules are reduced to principal components (PCs) that correspond well with the major module groups (Fig. 3b-d): the first PC is contributed mainly by the CTCF modules, explaining 35% of the total variance; the second PC is contributed mainly by the Pol2/promoter modules, explaining 18% of the total variance; the third and fourth PCs are contributed mainly by the p300/enhancer and AP-1 modules, explaining 16 and 10% of the total variance, respectively. In total, the first four PCs account for 79% of the total variance. These results indicate the dominant roles in genome-wide DNA binding of CTCF and cohesin, which have been suggested as key participants in shaping the three-dimensional genome structure , followed by promoter-binding factors and enhancer-binding factors. We found that the relative influence of the modules is not correlated with the number of factors in the modules because fewer factors contribute to the CTCF modules than to the promoter or enhancer modules.
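A minimal sketch of this PCA step on a random stand-in for the module-TF matrix; with the real 49 × 115 matrix, the explained-variance ratios of the leading components would correspond to the percentages quoted above.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    module_tf = rng.random((49, 115))     # stand-in for the 49 x 115 module-TF matrix

    # PCA over the module dimension; explained_variance_ratio_ gives the
    # per-component fractions of total variance discussed above.
    pca = PCA()
    pca.fit(module_tf)
    print(pca.explained_variance_ratio_[:4], pca.explained_variance_ratio_[:4].sum())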
Regulatory module analysis identified distinct binding partners of NFY in different regions
In addition to identifying the global trends in the regulatory modules, the combinatorial patterns for certain groups of TFs also generate hypotheses about TF interactions. The modules we discovered reveal context-dependent co-binding as reported in previous work. For example, we found that FOS mainly participates in five modules (Additional file 1: Figure S4), which recapitulate four categories of FOS co-localization patterns reported previously. Although the fifth FOS category “AP1-HOT” does not directly correspond to a single module, it can be factored into AP1 and promoter modules that are mixed in those regulatory regions. In addition, we found that the category of FOS + NFYB can be further divided into FOS + NFYB + SP2 (Module 40) and FOS + NFYB + USF (Module 32), which we discuss in more detail below.
NFYA and NFYB (two subunits of NFY) both participate in Modules 32 and 40, together with the co-binding partner FOS [21, 44]. However, in Module 32, NFY and FOS co-associate with USF1, USF2, ATF3, and MAX, while in Module 40, they co-associate with SP1 and SP2 (Fig. 4a). We verified that these two modules are used predominantly in different regions. More specifically, we found 1227 regions that are bound by both NFY and USF2 but not by SP2, 1758 regions that are bound by both NFY and SP2 but not by USF2, and only 120 regions that are bound by NFY together with both SP2 and USF2 (Fig. 4b). The majority of the regions using the NFY + SP module are TSS-proximal regions, while most of the NFY + USF regions are distal regions. Furthermore, NFY and USF2 co-binding exhibits a strong spacing constraint, with 1227 co-bound regions showing a 21-22 bp spacing between NFY and USF2 motif-supported sites. On the other hand, NFY and SP2 co-binding does not appear to have a specific spacing constraint. Enrichment analysis using GREAT shows that the NFY + USF bound regions are near genes with ontologies such as “nuclear estrogen receptor alpha network pathway” (FDR q-value = 7.1E-5), “steroid hormone receptor binding” (FDR q-value = 2.1E-4) and “Homeobox protein, antennapedia type, protein family” (FDR q-value = 4.0E-4); while the NFY + SP bound regions are near genes with ontologies such as “Krüppel-associated box protein family” (FDR q-value = 2.3E-11), “positive regulation of nuclease activity” (FDR q-value = 1.6E-3) and “cholesterol biosynthesis” (FDR q-value = 3.5E-3). Taken together, the regulatory module analysis identified that NFY and FOS bind with distinct combinations of factors in different types of regulatory regions that may regulate genes with distinct functions. NFY and SP1 have been shown to bind to promoters of a number of genes synergistically [45–47] or competitively. NFY and USF co-bind as a complex to the promoter of the HOXB4 gene in hematopoietic cells. Here we find that extensive NFY + SP and NFY + USF context-specific co-binding occurs in a mutually exclusive manner in thousands of regions, suggesting the possibility of different DNA binding modes for NFY and different consequences of gene regulation. These results highlight that systematically discovered regulatory modules may be used to generate specific hypotheses that can be tested with more detailed analysis.
The combinatorial rules of direct versus indirect binding
We next compared the co-binding partners of TFs when they bind directly (motif present) and indirectly (motif absent) to the genome. We define direct binding as the binding of a TF at sites that contain the cognate motif of the TF, and indirect binding as binding at sites that do not contain a detectable cognate motif. Previous studies have found that many ChIP-seq binding sites of sequence-specific TFs do not contain the cognate motif of the TFs, suggesting that binding may be indirect through the interaction with co-binding TFs [50, 51]. Understanding the combinatorial patterns of direct binding versus indirect binding may reveal the co-binding TFs that facilitate indirect binding.
For each sequence-specific factor X, the binding sites were divided into two groups: dX sites (direct binding) where X motif is present; iX sites (indirect binding) where X motif is not present. For example, CTCF sites were divided into dCTCF and iCTCF sites. These two groups were then treated as binding sites of distinct TFs. For the K562 dataset, this motif-based division expanded the total number of the TFs to 167, with 52 pairs of direct and indirect binding “factors” and 63 non-sequence-specific factors. Applying RMD to these data, we discovered 54 modules with the expanded set of direct and indirect factors (Additional file 1: Figure S5 and Additional file 2: Tables S5-S6). We then investigated the presence of co-binding factors that are specific to direct or indirect binding. For example, previous work reported that FOS co-localizes with NFYB . Our analysis showed that this co-localization mostly occurs between indirect FOS binding and both direct and indirect NFYB binding (Modules 20 and 24) (Fig. 5a). To more systematically investigate the indirect binding of factors, we compute the pairwise correlation between all the 52 direct binding factors and all the 52 indirect binding factors across their module participation profiles (Fig. 5b). A high correlation means similar module participation, indicating that the indirect binding factor likely associates with the corresponding direct binding factor. We found several groups of such associations: indirect binding of FOS with direct binding of NFY + SP1 + SP2, indirect binding of E2F6 with direct binding of MYC + MAX + BHLHE40 + USF + MXI1 + YY1, and indirect binding of ATF3 with FOS + FOSL1 + JUNB + JUND + JUN.
Furthermore, we observed that the direct binding sites and indirect binding sites of some factors participate in very different modules. For example, direct FOS binding co-occurs with AP-1 factors (Module 29) and MAF + BACH1 + NFE2 (Module 27), while indirect FOS binding co-occurs with NFY + SP1 + SP2 (Module 24) or NFY + USF1 + USF2 (Module 20) (Fig. 5a).
With the observation that direct and indirect binding sites of some factors associate with different combinations of factors, we reasoned that it would then be possible to predict whether a sequence-specific TF binds DNA directly or indirectly based on the proximal binding of other TFs. To test this hypothesis, we trained a random forest classifier to predict whether a TF binding site is a direct or indirect site using the proximal binding of other TFs in the region. We quantified the difference between the direct and indirect binding partners of a TF by introducing a TF diversification score, which is defined as the Pearson correlation distance between direct and indirect binding modules of the TF. For factors with high TF diversification scores, such as FOS, JUN, JUNB, JUND, MYC, SRF, USF1, and MXI1, the classifier predicted the direct/indirect binding of the factors with 80-95% accuracy (Fig. 5c and Additional file 4: Table S8). Furthermore, the ability to accurately predict direct and indirect binding of a TF is highly correlated with the TF diversification scores (Pearson correlation r = 0.80). The higher the TF diversification score, the higher the prediction accuracy of the random forest classifier (Fig. 5c). These results confirm that the direct and indirect binding of certain TFs can be explained by the specific combinations of co-binding factors.
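The TF diversification score is just a Pearson correlation distance between two module-participation vectors; a sketch on toy vectors follows (the 54-entry length matches the expanded direct/indirect module count, and the random values are placeholders).

    import numpy as np
    from scipy.spatial.distance import correlation

    # Toy module-participation vectors for the direct (dTF) and indirect (iTF)
    # binding sites of one factor; 54 entries match the expanded module count.
    rng = np.random.default_rng(3)
    d_vec = rng.random(54)
    i_vec = rng.random(54)

    # TF diversification score = Pearson correlation distance (1 - r) between
    # the direct-binding and indirect-binding module vectors.
    print(f"diversification score: {correlation(d_vec, i_vec):.2f}")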
Many regulatory regions use more than one module
To understand the interplay among regulatory modules in regulatory regions, we investigated the extent of multiple-module usage by regions. Multiple-module usage is a structure that cannot be discovered by previous hard-clustering-based approaches, but can be revealed by RMD. We found that 25,107 regulatory regions (~18%) use more than one module (Fig. 6a). For example, we found 3,742 regions that use both enhancer and AP-1 modules (Fig. 6b), 3,071 regions that use both promoter and CTCF modules, and 2,514 regions that use both promoter and enhancer modules (Additional file 1: Figure S6). Notably, more than 63% of the regions that use both enhancer and promoter modules are marked by both H3K4me1 (enhancer-related) and H3K4me3 (promoter-related) histone modifications (Additional file 1: Figure S6), but they are annotated as either TSS/promoter or enhancer/weak enhancer chromatin states. This represents a limitation of genome annotation methods that only assign a single label to a genome segment. Furthermore, a module may co-occur with different other modules in distinct types of regulatory regions. For example, AP-1 modules co-occur with enhancer modules in the regions annotated with strong/weak enhancer chromatin states, co-occur with CTCF modules in regions mainly annotated with CTCF state, and co-occur with promoter modules in regions mainly annotated with TSS/promoter state (Fig. 6b). These different types of regions were also found to associate with different functional categories by a GREAT analysis: the AP-1 and enhancer module co-bound regions are enriched with genes involved in “platelet activation” (FDR q-value = 3.1E-11), “regulation of inflammatory response” (FDR q-value = 4.4E-11), “regulation of translation” (FDR q-value = 6.3E-10), and “myeloid leukocyte activation” (FDR q-value = 1.1E-7); while the AP-1 and promoter module co-bound regions are enriched with genes involved in “viral process” (FDR q-value = 1.4E-12), “protein kinase binding” (FDR q-value = 3.7E-11), and “apoptotic signaling pathway” (FDR q-value = 3.3E-9). These differences in functional enrichment suggest that the AP-1 module carries out distinct functions when it is combined with different types of other modules.
To ensure that the discovery of multiple-module usage was not the result of inappropriate merging of proximal regulatory regions, we chose a more conservative inter-site distance, 50 bp, for merging the sites into regions. Furthermore, we studied the positions of the binding sites that are assigned to different regulatory modules. In many cases, the binding sites assigned to distinct modules are spatially mixed. For example, in an 80 bp region on chromosome 1, CTCF, CTCFL and RAD21 sites from a CTCF module are mixed with POL2, E2F6, MAX, EGR1, and other sites from a promoter module (Fig. 6c). Such co-occurrences between CTCF and promoter modules are consistent with previous findings that CTCF mediates long-range DNA-looping interactions between enhancers and promoters, and that ZNF143 binds directly to the promoters and occupies anchors of chromatin interactions connecting promoters with distal enhancers.
In summary, our analysis suggests that multiple-module usage is a prevalent aspect of regulatory activities in the cells and it is revealed by RMD.
Cell-type-specific regulatory modules
We next investigated if we could observe cell-type-specific and cell-type-common regulatory modules. We used ChIP-seq data of 56 TFs that were profiled in both K562 and GM12878 cells . Following a previous approach , we constructed the co-binding regions in each cell type separately (~105,000 regions in K562 and ~91,000 regions in GM12878) and then combined the data from all regions from both cell types for RMD analysis. RMD discovered 48 modules that describe the binding of the 56 factors in K562 and GM12878 cells (Fig. 7a). To aid interpretation of the modules, we clustered them into 16 module groups. Interestingly, the promoter-associated modules and CTCF-associated modules are each clustered into one group that is common to K562 and GM12878, while the enhancer-associated modules are clustered into two cell-type-specific groups.
To investigate the degree of cell-type-specificity of the discovered modules, we computed the fraction of the binding regions that are contributed from the K562 data or from the GM12878 data for each module. We found that some combinations of factors are mainly used in K562 cells and others are mainly used in GM12878 cells even though all of the 56 factors bind regulatory regions in both cell types (Fig. 7a). In particular, many enhancer modules are preferentially used in one cell type. In K562 cells p300 co-binds with JUND, JUN, RCOR1, ATF3, FOS, CEBPB, MAFK, MAX, and MYC (Module group 10), but in GM12878 cells p300 co-binds with MEF2A, SP1, SPI1, BCLAF1, BCL3, and BHLHE40 (Module group 11). Both cell types share other module groups, such as the promoter and CTCF module groups. In the regulatory regions that are bound in K562 or GM12878 cells but not bound in both cell types, cell-type-specific modules are used preferentially by one cell type, while shared modules are used in both cell types (Fig. 7b).
We next investigated whether a regulatory region that is bound in both cell types uses cell-type-specific regulatory modules. Out of 50,910 regions that are bound in both cell types, we found that 1,956 regions use a different enhancer module in the two cell types (Fig. 7c). For these regions, module group 10 is used in the K562 cells and module group 11 is used in the GM12878 cells. Thus although these regions are bound in both cell types and may act as enhancers, as suggested by the binding of transcriptional co-activator p300, they are bound by cell-type-specific combinations of factors in K562 and GM12878. In addition, we found 1,312 regions that are bound by CTCF module factors in both K562 and GM12878 cells, and also bound by the K562 enhancer module factors in K562 cells, suggesting the usage of K562-specific enhancers in these CTCF/cohesin bound regions. In summary, important differences can exist between the sets of regulatory modules that bind the same regulatory regions in distinct cell types.
Gene regulation specificity is orchestrated by the interactions among a complex group of trans-acting factors that we have organized into distinct combinable modules. Previous methods have modeled TF combinatorial binding as pairwise interactions or as a single module at a given regulatory region; they are thus not able to capture the complexity of modular combinatorial binding. We have developed a new approach to summarize high-dimensional binding data into combinations of combinatorial binding modules. Our approach can provide important insights into the mechanisms of gene regulation not available with previous hard-clustering-based methods. The modules discovered are easy to interpret individually and as a whole, capturing key aspects of global combinatorial binding patterns and providing a resource for generating new hypotheses about TF interactions.
Our analysis reveals that modular combinatorial binding occurs in tens of thousands of regions and that specific combinations of modules may regulate distinct functional groups of genes, suggesting that multiple-module usage is a prevalent aspect of regulatory activities in the cells. Modeling TF combinatorial binding as regulatory modules helps to dissect the complexity of combinatorial binding of many TFs into compact and easily interpretable modules. Moreover, such explicit modeling of modular structure helps to uncover specific modules that are combined with other modules and are easily missed by previous approaches. With a larger number of additional TFs being assayed by large-scale efforts such as the ENCODE project, we expect that RMD will be useful in revealing the complexity of combinatorial binding in these future data.
Previous work attempted to distinguish between direct and indirect TF in vivo binding by integrating in vivo nucleosome occupancy data and in vitro protein binding microarray experiments , or by using TF binding motifs and DNase-seq footprints . In this work, we discovered that the direct and indirect DNA binding of certain TFs is associated with distinct sets of co-binding partners and that without using motif information the co-binding partners alone can predict whether the TF binds directly or indirectly with high accuracy. In addition, our direct/indirect combinatorial binding maps allow prediction of co-binding TFs that may facilitate the indirect binding of TFs. The direct/indirect binding analysis was conducted with a simplified classification of the binding sites based on whether they contain a detectable cognate motif. Recent studies show that clusters of low-affinity binding sites with degenerate motifs can be functional and that binding sites without consensus motifs may use DNA shape to facilitate the in vivo binding of TFs . Future analyses that take into account the clustering of binding sites and the DNA shape information may gain more insights on the role of combinatorial binding of co-binding factors on TF binding.
Our method is a general method for studying TF combinatorial binding. It can be applied to various cell types and species [11, 12, 55, 56] where sufficient binding data are available. One potential limitation on studying combinatorial TF binding from ChIP-seq data is the relative scarcity of high quality antibodies. To expand RMD combinatorial binding analysis to more TFs or to cell types that do not have sufficient ChIP-seq data, one strategy is to augment or replace ChIP-seq data with TF binding predicted from DNase-seq or ATAC-seq data and TF motif information [59, 60].
Data and preprocessing
ChIP-seq data for the TFs and corresponding controls were downloaded from the ENCODE project website http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/. Fastq files were aligned to the hg19 genome with Bowtie version 0.12.7 with options “-q --best --strata -m 1 -p 4 --chunkmbs 1024”. GEM was used to call binding events with default parameters using the aligned reads of TF ChIP-seq experiments and the corresponding control experiments. GEM produces two sets of binding site calls for each dataset: GPS binding calls without motif information and GEM binding calls with motif information. The binding calls overlapping with the ENCODE blacklist regions (http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeMapability/wgEncodeDacMapabilityConsensusExcludable.bed.gz) were excluded from this analysis.
Construct co-binding regions
GPS binding calls of all the factors in a given cell type were pooled together to construct the co-binding regions. Each binding call was expanded +/−50 bp from the summit position. Then overlapping binding calls were merged to form non-overlapping co-binding regions. Only co-binding regions with three or more binding calls were used for subsequent analysis. We also performed analyses using regions that have a minimum of 2 or 4 TF sites; the results are similar (data not shown). An alternative binding site expansion distance of 100 bp was tested to construct co-binding regions; it gave similar results. For this paper, the expansion distance of 50 bp was used because the spatial resolution of TF ChIP-seq binding calls is about 30-50 bp and the 50 bp expansion distance is more conservative for analyzing multi-module co-occurrences than the 100 bp distance. From the K562 co-binding regions (n = 142,962), we constructed a region-TF matrix (142,962 × 115) that contains the number of binding sites of each TF in each region. The code for constructing co-binding regions and for generating topic model input files is freely available at http://groups.csail.mit.edu/cgs/gem/rmd/.
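A simplified sketch of the region-construction rule just described (single chromosome, toy coordinates, placeholder TF names): expand each summit by ±50 bp, merge overlapping intervals, and keep regions with at least three sites.

    def merge_binding_calls(summits, expand=50, min_sites=3):
        """Expand each summit by +/-expand bp, merge overlapping intervals on one
        chromosome, and keep merged regions containing >= min_sites binding calls.

        summits: list of (position, tf_name) tuples for a single chromosome.
        Returns a list of (start, end, [tf_names]) co-binding regions.
        """
        regions = []
        cur_start = cur_end = None
        cur_tfs = []
        for pos, tf in sorted(summits):
            start, end = pos - expand, pos + expand
            if cur_end is None or start > cur_end:          # no overlap: start a new region
                if cur_tfs and len(cur_tfs) >= min_sites:
                    regions.append((cur_start, cur_end, cur_tfs))
                cur_start, cur_end, cur_tfs = start, end, [tf]
            else:                                           # overlap: extend the current region
                cur_end = max(cur_end, end)
                cur_tfs.append(tf)
        if cur_tfs and len(cur_tfs) >= min_sites:
            regions.append((cur_start, cur_end, cur_tfs))
        return regions

    # Example: three nearby summits merge into one kept region; the lone site is dropped.
    print(merge_binding_calls([(100, "CTCF"), (130, "RAD21"), (180, "SMC3"), (5000, "MYC")]))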
The hierarchical Dirichlet processes (HDP) topic model was used in this study because it automatically determines the number of topics from the data. A C++ implementation of HDP was downloaded from http://www.cs.columbia.edu/~blei/topicmodeling_software.html. The parameters used were “--eta 0.1 --max_iter 2000”. Eta is the hyperparameter for the topic Dirichlet distribution. We tested different eta values (0.01, 0.05, 0.1, 0.5 and 1) and the results were similar. We chose eta to be 0.1 to encode our assumption that each topic contains only a few TFs. The HDP inference procedure typically converged after about 1,000 iterations. We ran the HDP with 3 different random seeds for 2000 iterations and used the run that had the highest data likelihood reported by the HDP. The inputs to the HDP are the TF binding site counts in the co-binding regions. Each region is treated as a document and the TF sites as words in the documents. The output of the HDP includes the module-TF matrix and the module assignment of each TF binding site.
For the module-TF matrix, each column vector (TF participation vector) describes the distribution of the TF binding sites across all the modules, and each row vector (module vector) represents the number of binding sites contributed by each TF to the module. We compute a z-score for each TF vector. A TF is considered to participate in a module if the z-score of the TF-module pair is larger than 1. Similarly, we compute a z-score for each module vector. A TF is considered to be a “main driver” of the module if the z-score of the TF-module pair is larger than 1. Each module is labeled with the names of the main TF drivers, which are ranked by their z-scores. To facilitate interpretation of the modules, the module-TF matrix was clustered into module groups using hierarchical clustering with Pearson correlation distance and average linkage. The cutoff distance for clustering is 0.5.
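One plausible reading of the z-score rule above, applied to a random toy module-TF matrix; the toy counts and the threshold placement are illustrative only, not the authors' code.

    import numpy as np

    rng = np.random.default_rng(4)
    module_tf = rng.poisson(2.0, size=(49, 115)).astype(float)   # toy module-TF counts

    # Row-wise z-scores: a TF whose contribution to a module is > 1 SD above that
    # module's mean is flagged as a "main driver" of the module.
    row_z = (module_tf - module_tf.mean(axis=1, keepdims=True)) / module_tf.std(axis=1, keepdims=True)
    main_drivers = [np.flatnonzero(row_z[m] > 1) for m in range(module_tf.shape[0])]

    # Column-wise z-scores: modules in which a TF's site count is > 1 SD above that
    # TF's mean across modules are the modules the TF is said to participate in.
    col_z = (module_tf - module_tf.mean(axis=0)) / module_tf.std(axis=0)
    participates = col_z > 1
    print(len(main_drivers[0]), int(participates.sum()))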
The region-module assignment table assigns each TF binding site from each region to one of the modules. The assignment table was summarized into a region-module matrix where each element of the matrix represents the number of the TF binding sites in a region that are assigned to a particular module. A module is considered as being used in a particular regulatory region if 1) at least three binding sites in the region are assigned to the module and 2) the z-score of the site count for the module in the region is larger than 1.
Comparing HDP with k-means clustering and NMF
To test the ability of RMD, k-means clustering, and NMF to accurately capture TF-TF correlations that are present in the binding data, we applied all three approaches to the same set of K562 cell ChIP-seq binding data to discover the same number of modules (k = 49). We applied k-means clustering and NMF on the K562 region-TF binding matrix using the MATLAB software (MATLAB and Statistics Toolbox Release 2012a, The MathWorks, Inc., Natick, Massachusetts, United States). For k-means clustering, Euclidean distance was used as the distance metric. The cluster number (i.e. rank for NMF) was set to be k = 49 and k = 100 to compare with the HDP topic model with 49 topics. We refer to the k-means clusters, NMF components, and HDP topics collectively as modules. To compare the three approaches, we first computed the pairwise TF correlation scores using the original region-TF matrix, or the module-TF matrices derived from these three methods. Then we computed the correlation between the pairwise TF correlation scores from the original region-TF matrix and those from the three derived module-TF matrices. We also compared topic model modules and k-means clusters by computing their Pearson correlation using the module vectors versus the cluster vectors. The comparison results are robust across different correlation cutoff values for matching the modules (Additional file 1: Figure S1c).
Principal component analysis (PCA)
PCA was performed using the MATLAB software (MATLAB and Statistics Toolbox Release 2012a, The MathWorks, Inc., Natick, Massachusetts, United States) on the module dimension of module-TF matrix.
Protein-protein interaction and epigenomic annotation of regulatory modules
The protein-protein interactions derived from IP-MS data for K562 cells were downloaded from http://www.cell.com/cms/attachment/2021777707/2041662737/mmc1.xls. For the 33 direct physical interaction pairs that contain the TFs in our study, we considered an interaction as rediscovered by RMD if the two TFs are both “main drivers” in the same module. The p-value of overlap between IP-MS data and the RMD modules was calculated by fixing the pulled-down TFs while permuting all the partners identified by mass spectrometry and calculating the odds of getting a higher overlap with RMD modules. A total of 200 permutations were performed, enabling us to estimate the p value to the level of 0.05.
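A rough sketch of the permutation scheme just described; representing modules as sets of main-driver TFs and the overlap rule are simplifications of the actual procedure, and the example TFs are placeholders.

    import numpy as np

    def permutation_pvalue(pulled_down, partners, modules, observed, n_perm=200, seed=0):
        """Shuffle the mass-spec partners while keeping the pulled-down TFs fixed,
        and count how often the shuffled pairs overlap the modules at least as
        well as the observed interactions do."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_perm):
            shuffled = rng.permutation(partners)
            overlap = sum(
                any(a in m and b in m for m in modules)
                for a, b in zip(pulled_down, shuffled)
            )
            if overlap >= observed:
                hits += 1
        return hits / n_perm

    # Toy example: two IP-MS pairs, two modules given as sets of main-driver TFs.
    modules = [{"CTCF", "RAD21", "SMC3"}, {"GATA1", "TAL1", "EP300"}]
    print(permutation_pvalue(["CTCF", "GATA1"], ["RAD21", "TAL1"], modules, observed=2))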
DNase hyper-sensitive open chromatin peak calls, histone modification peak calls, and the combined genome segmentation annotations were downloaded from http://ftp.ebi.ac.uk/pub/databases/ensembl/encode/integration_data_jan2011/. The co-binding regions were annotated with the epigenomic annotations if they overlap at least 1 bp. For each module, we identified the regulatory regions that use the module and computed the fractions of these regions that overlap with the annotations.
GREAT ontology enrichment analysis
The GREAT ontology enrichment analysis was performed on the GREAT website (http://bejerano.stanford.edu/great/public/html/index.php) with the default “basal plus extension” association rule. BED files of the regions that use the regulatory modules are used as the inputs. Hierarchical clustering was performed to cluster the GO terms on the -log10 (FDR q-value) of GO terms with Pearson correlation distance and average linkage.
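A sketch of this clustering step using SciPy, with a random matrix standing in for the -log10(FDR q-value) table of GO terms by modules.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(7)
    neglog_q = rng.random((200, 49))   # toy -log10(FDR q-value) matrix: GO terms x modules

    # Average-linkage hierarchical clustering of GO terms with Pearson
    # correlation distance, mirroring the procedure described above.
    dist = pdist(neglog_q, metric="correlation")
    tree = linkage(dist, method="average")
    labels = fcluster(tree, t=0.5, criterion="distance")
    print(len(set(labels)), "clusters")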
Direct versus indirect binding analysis
For the direct versus indirect binding analysis, GEM binding calls were used for sequence-specific binding factors for which the GEM motifs could be verified. The positional frequency matrix of the top-ranked motif reported by GEM was compared against known motifs of the same factor in the public databases using STAMP, as previously described. For the 52 sequence-specific TFs for which a database match for the top motif was found, the GEM binding calls were divided into direct and indirect binding sites based on whether the binding sites contain a motif match of the TF. The direct and indirect binding sites were treated as separate factors for topic modeling analysis. For example, CTCF sites were divided into dCTCF and iCTCF sites. For the non-sequence-specific factors and the sequence-specific factors for which the top GEM motif does not match the known database motifs of the factor, GPS binding calls were used. All the GEM and GPS binding calls were then pooled together to construct the co-binding regions. In total, 159,204 co-binding regions with binding sites from 167 “factors” were constructed. Applying RMD, we discovered 54 modules. The correlation between direct binding of a TF and indirect binding of another TF (matrix shown in Fig. 5b) was computed using the TF vectors in the module-TF matrix.
Predicting direct/indirect binding using random forest
We trained a random forest (RF) classifier to predict whether a TF binding site is a direct or indirect site using the proximal binding of other TFs in the co-binding region. We used the TreeBagger implementation of RF in the MATLAB software (MATLAB and Bioinformatics Toolbox Release 2015b, The MathWorks, Inc., Natick, Massachusetts, United States). More specifically, using the region-TF matrix (159,204 × 167), we took the rows that contained either direct or indirect binding sites of the TF, used the columns corresponding to the direct or indirect binding of the TF as the prediction target and the rest of the columns (binding of the other TFs) as the features. For each sequence-specific co-binding TF, the dTF and iTF columns were combined, ignoring the motif information. Therefore, the prediction is harder because it is based only on the identity of the co-binding TFs, not their motif information. For each sequence-specific TF, we trained five RFs, each with a distinct random subset (80%) of the data, and then tested on the remaining 20% of the data. The prediction accuracy values of the five classifiers were then averaged for each TF. The correlation between direct and indirect binding of a TF (shown in Fig. 5c) was computed using the corresponding TF vectors in the module-TF matrix.
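A minimal Python analogue of the procedure above (the paper itself uses MATLAB's TreeBagger); the feature matrix and labels here are random placeholders rather than the real region-TF data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Toy stand-in for the rows of the region-TF matrix that contain the focal TF:
    # features are the binding of the other TFs, labels are direct (1) / indirect (0).
    rng = np.random.default_rng(5)
    X = rng.poisson(0.3, size=(2000, 165))
    y = rng.integers(0, 2, size=2000)

    # Five classifiers, each trained on a random 80% split and tested on the rest,
    # with accuracies averaged per TF as in the text above.
    accs = []
    for seed in range(5):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
        rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_tr, y_tr)
        accs.append(rf.score(X_te, y_te))
    print(f"mean accuracy: {np.mean(accs):.2f}")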
We used ChIP-seq data of 56 TFs that were profiled in both K562 and GM12878 cells . When there were multiple datasets for the same factor, we chose the datasets that were produced by the same lab, using the same antibodies, or had similar number of binding calls. Following a previous approach , the co-binding regions were constructed separately for K562 and GM12878. These data from both cell types were then concatenated for topic model analysis. Thus we can learn TF co-binding relationships that are shared across cell types but still keep track of the cell type origin of the TF sites and the regions.
Abbreviations
HDP: Hierarchical Dirichlet processes
IP-MS: Antibody immunoprecipitation and mass spectrometry
NMF: Non-negative matrix factorization
PCA: Principal component analysis
RMD: Regulatory module discovery
TSS: Transcription start site
Georges AB, Benayoun BA, Caburet S, Veitia RA. Generic binding sites, generic DNA-binding domains: where does specific promoter recognition come from? FASEB J. 2010;24:346–56.
Spitz F, Furlong EEM. Transcription factors: from enhancer binding to developmental control. Nat Rev Genet. 2012;13:613–26.
Weingarten-Gabbay S, Segal E. The grammar of transcriptional regulation. Hum Genet. 2014;133:701–11.
Slattery M, Zhou T, Yang L, Dantas Machado AC, Gordân R, Rohs R. Absence of a simple code: how transcription factors read the genome. Trends Biochem Sci. 2014;39:381–99.
Stanojevic D, Small S, Levine M. Regulation of a segmentation stripe by overlapping activators and repressors in the drosophila embryo. Science. 1991;254:1385–7.
We thank Tatsunori Hashimoto and Haoyang Zeng for insightful comments and assistance in analysis. This work was supported by National Institutes of Health (grant 1U01HG007037-01 to D.K.G).
Availability of data and materials
The software, source code, and data for running RMD are freely available at http://groups.csail.mit.edu/cgs/gem/rmd/.
YG conceived the project, developed the method, analyzed the results, and drafted the manuscript. DKG supervised the project, helped interpret the analysis, and helped to draft the manuscript. Both authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Ethics approval and consent to participate
(PDF 1964 kb)
The RMD module matrix and summary for K562, GM12878, and K562 direct versus indirect binding, respectively. (XLS 181 kb)
The overlap between RMD modules and published protein-protein interactions. (XLS 25 kb)
TF diversification scores and random forest prediction accuracies. (CSV 609 bytes)
About this article
Cite this article
Guo, Y., Gifford, D.K. Modular combinatorial binding among human trans-acting factors reveals direct and indirect factor binding. BMC Genomics 18, 45 (2017). https://doi.org/10.1186/s12864-016-3434-3
- Computational genomics
- Transcription factor
- Combinatorial binding
- Direct and indirect binding
- Topic model
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00429.warc.gz
|
CC-MAIN-2023-40
| 59,353
| 158
|
https://math.stackexchange.com/questions/3375473/self-learning-calculus-where-does-langs-first-course-in-calculus-stay-when-com/3375941
|
code
|
I need a Calculus book that suits my level (or at least a primary book which I will follow more closely).
I don't have much formal education but have recently read Lang's Basic Mathematics (cover to cover, doing almost all of the exercises).
In my search for books I found:
Spivak - Calculus
Apostol - Calculus 1 and 2
Courant - Introduction to Calculus and Analysis vol 1 and 2
Lang - First Course in Calculus
I found info about the other books, and it seems that Apostol's Calculus will be best suited for a self-learner and covers more than Spivak (it also gives some applications), while Courant covers even more than Apostol but has fewer and harder problems.
I have read a little bit (about derivatives, limits) of Lang's book and it's quite easy to follow.
So, where does Lang's book stand? Are the other books too advanced for me?
Which book should I use as the primary text if time is a concern and I want to cover more material? (Sorry if that's too many questions.)
There is a program at the college I want to attend called "Mathematics and Informatics" (that's in Eastern Europe).
I'm kind of in a hurry because I want to get into college and I have the chance to skip the first year if I know enough. (The first year is mostly C++, Calculus and a little bit of Linear Algebra, and I already know enough C++.)
So if I'm to skip the first year I should be ready for Mathematical Optimization, Discrete Math, Differential Equations and Information Theory (I don't know if I'm translating the names correctly). All of these are intro level.
I have around 6-7 months before I attempt the exams, and a little bit more before I eventually start college.
BTW, Basic Mathematics was quite challenging and there were some exercises that got me to look at the back of the book for solutions or search on the internet. I don't want to make it sound like I've done 100% of them (someone in my situation might get discouraged after reading this), but I tried.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00770.warc.gz
|
CC-MAIN-2023-23
| 1,899
| 17
|
https://discussions.apple.com/thread/5706538
|
code
|
I have an iPhone 4 and recently updated to iOS 7. The trouble is that when I did, I lost the 3G function. I have tried doing a network restore, but this only brought 3G back for a short period. Can anyone help?
You posted in the iPad forum instead of the iPhone forum. To get answers to your question, next time post in the proper forum. See https://discussions.apple.com/index.jspa I'll request that Apple relocate your post.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111868.79/warc/CC-MAIN-20160428161511-00034-ip-10-239-7-51.ec2.internal.warc.gz
|
CC-MAIN-2016-18
| 424
| 2
|
http://www.evrenayan.net/error-item-not-crawled-due-to-one-of-the-following-reasons/
|
code
|
You may see the following error in the crawl log even though you have configured the SharePoint Search Service properly. In this case, search will interrupt the indexing of these items.
Item not crawled due to one of the following reasons: Preventive crawl rule; Specified content source hops/depth exceeded; URL has query string parameter; Required protocol handler not found; Preventive robots directive.
As a solution to this situation, you can create and upload a robots.txt file to your site’s root address. To do this, write the following content into an empty notepad and save it as robots.txt.
User-agent: MS Search 6.0 Robot
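The snippet above appears truncated in this copy of the article; as a sketch, a robots.txt that names the SharePoint crawler and blocks nothing would look like the following (the empty Disallow line is an assumption about the original article's intent):

    User-agent: MS Search 6.0 Robot
    Disallow: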
To upload it to the root of your site, you must first connect to your site with SharePoint Designer. Then you can copy / paste to the same level as the document libraries by clicking the “All Items” link in the left menu.
After this, you can try to run full crawl again.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710690.85/warc/CC-MAIN-20221129064123-20221129094123-00635.warc.gz
|
CC-MAIN-2022-49
| 895
| 6
|
https://pt.slideshare.net/missrogue/your-customers-journey-in-the-social-era
|
code
|
I presented this at the United Benefits Advisors' spring conference in May - to answer the question, "Why would benefits advisors use social media?" I presented it like this: The customer journey is non-linear and unpredictable. It goes online/offline/and more. You need to be on that path in as many places as possible. Social is a good chunk of that now.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00290.warc.gz
|
CC-MAIN-2023-23
| 356
| 1
|
http://www.verycomputer.com/278_dbf32d564c5b15bd_2.htm
|
code
|
> > If you create a Shortcut to the html file & place that on the Desktop, then
> > you can change the icon & easily rename. Just Rt click the shortcuts icon
> > to get to its Properties.
> This is for distribution on a CD--- would a shortcut work on a CD??
A relative path is the path relative to where the shortcut sits. So if
your folder tree looked like:
top to bottom and your Shortcut icon was located in:
then the relative path to a document in:
the relative path to a document in:
and if they are both in the same folder it's just the Document_name.htm
alone. If you have to put something for the "Start in" path in that case
just drop it back one and point it to itself like:
This kind of relative path works in Unix (all), AmigaDOS and Windows (all),
and I think in MS-DOS from 2.0 and up. Mac may be different. I dunno.
That's it. All done.
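The concrete folder names in the explanation above did not survive extraction, so here is a purely hypothetical illustration of the idea. If the shortcut and the document sit on the CD like this:

    \Start\                        (shortcut.lnk lives here)
    \Start\Docs\Document_name.htm  (the page being linked to)

then the relative path to put in the shortcut is simply Docs\Document_name.htm, and if both files are in the same folder it is just Document_name.htm on its own.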
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145839.51/warc/CC-MAIN-20200223185153-20200223215153-00453.warc.gz
|
CC-MAIN-2020-10
| 851
| 15
|
https://www.freelancer.cn/job-search/get-live-help-online-interior-design-mathematics/
|
code
|
Need a database: Location: Russia (best if in Moscow or nearby) Industry: furniture stores, interior shops, interior designers. 150 records Company name 100% Address 100% Postal code 100% City 100% Phone number 100% Year of establishment 100% Other data if available (email, website) Deadline: 20.11
I am looking for an ex...bug search & fix and improvements. The source code is python and requires some decent mathematical skills to read. Beside python skills it is required to have skills in mathematics, non-linear optimization, python related libraries, etc. We also might need to inspect some C+ code as well in order to compare to the python code.
...Basic Requirements: • Bachelor’s degree in Mathematics or equivalent experience • 5+ years of professional probability game development experience • Ability to write clear, functional code in Python or R • Written and verbal communication skill in English Advantages if you have: • Experience in probability design and implementation of probability ca...
I am l...can spend a few hours to teach me how to write C++ code that executes an actual decision(in this case buying/selling a stock). This does not need to involve huge API's or mathematics, I just want to be able to use C++ code to actually make decisions. I am a pretty good c++ programmer so you will not need to teach me how to write code in c++.
comprehensive understanding of theory and mathematics of stenography or information hiding in general. learn how to program with matlab and how to implement project specific functions/algorithms for multi media stenography. the application of stenography for digital multi media content. explore teh application of these algorithms. create GUI in matlab
I am looking to build a house. I need help to design exterior and interior house plans and 3D model. I know what I need in terms of the house design. Basically I have the idea and it model but need to put together the house plans.
DO NOT BID UNLESS YOU R...follow the format/style of the sample file. You can handwrite or use any software but it must be readable. In order to do this, you need to have good knowledge of discrete mathematics/structures and algorithms. Before making a bid please be sure to read the details and make sure if you are capable of doing the task or not.
...to our new office. We are very happy with the new place. But the interior and the decoration deserve attention. We would like to upgrade our office to a nice workplace with an industrial look. A place where people want to be and feel like home. We are looking for someone who can help us with a beautiful layout (renders) of our office building. So that
Candidates in scientific writing are required to specialize in mathematics, mathematics statistics, experience and be required to be Egyptian or Arab nationality
Hey.... I`m searching for some photographer who can take me nice interior photos...No need like royal photos. I need a nice, easy, warm light photos(like your home photos) The main goal is a nice clean wall and some furniture, I want to put my product on the wall, it will be like an advertising wall... Kid room, kid playroom, living room, public room(coffee
...Native English writers(AUS) only. Must be able to write in AUS English(not US/UK). Based on the quality and consistency of the work, more projects may be awarded. The writer will get consistent work for a long time. Only original work accepted. Plagiarism is/will not be tolerated and will lead to cancellation of the contract. Printable MS-Word documents articles
I am working in a research project and need to code some data collected through a survey and interviews, pre...be analyzed. The main purpose of this research study is to find out how teacher preparation programs in New York and California prepare pre-service teachers, especially mathematics teachers, to integrate technology for teaching and learning.
we need a graphic design concept that we can create e-learning videos cartoons based on it. the design should have 1- transition scene 2- backgrounds 3-movement concept 4- icons based on the material (math graphs, cars , apples and any icons) 5- fonts and headlines 6- how the headers would look like 7- more than one colour variation the can work
We have a new house and are looking for a experienced and certified interior decorator for the house. The list of items are mentioned below: 1.
We need a design for a 19m2 vacation apartment near the sea. First thing -> Please download and read: "read me _ [login to view link]" and "read me 2_work [login to view link]". Here, all details are provided. *********** A note to all Freelancers. This project serves to find compatible freelancers to work on further individual projects. ****************
The purpose for this contest is to provide us for a design to our store, the store description is : 1- the store is about 400 m2 of space on the first floor in Kuwait, the floor plan is provided on the files( first floor, plot 354) has streets for 3 sides and connected with another building on the last side 2- the store has 80 gaming computer, 6
...need of test makers for basic mathematics concepts from 1st to 7th grades standards. We need thousands of questions. The tests to make should be multiple choice and simple. Attached is the list of categories that the test questions should follow. You should be a math teacher or a university student taking up mathematics courses to be qualified for the
We focus on helping kids of school-going age learn about animals, spelling, geography, mathematics and simple science experiments, among other many great topics and resources. In respect to that, we are looking to hire and work with professional writers who are capable of taking young readers on a journey that makes them read engage in hands-on activities
You need to create some formulas, theorems, lemma for certain methodologies based on reversible computing. Information will be provided. Bid only if you have relevant s...lemma for certain methodologies based on reversible computing. Information will be provided. Bid only if you have relevant skills and confident about your proficiency in mathematics.
Emerging design studio, we have a variety of projects. You will have to come into our office from time to time. It's an going project. You must be located in Hong Kong. Suitable for new moms who have baby/kids duties. Payment depends on skill level.
1) Comprehensive understanding of the theory and mathematics of steganography or information hiding in general.(through regular meetings,logbook,refrence list and report and viva) 2) Learn how to program with MATLAB and how to implement project specific functions/algrothimas for multimedia steganography. If time permits create GUI in Matlab. 3) The
...every one I am looking for a very professional Android developers who are proficient in programming, design, etc. ... in order to build an educational application for Android. The application should be very similar to “khan academy” application in design and the way they structure lessons and videos. Here is a link to it in playstore (please take a look
...register and post these articles like instructable and Arduino forum. Here are details of the requirement. 0. Responsive. 1. Science , technology , Engineering and mathematics related posts including images, video link (YouTube and other), programing code, Mathematical formulas and documents and file attachment option like word, PowerPoint, excel
This is...check if the solution provided is correct, and check the originality of the questions (given an original test to compare with). You must have a strong knowledge in basic mathematics and non-verbal reasoning and has perfect skill in British English. Please don't hesitate to place a bid if you think you are qualified for the job. Thank you!
...for someone to deliver an interior design sketch (like the attached), wither drawn by hand or electronic. Would much prefer not to have 3D render, unless it looks super lifelike and thought has gone into the design. The sketch must be creative and unusual and you must use your own creative talent to come up with the design based on a set of requirements
...the skin and an ancient god at the side and (or) back of the truck ), on the lightbox appear the phrase Trans Athens (lightbox is located at the top of the front truck) - Interior of the truck to flag colors and dashboard to light blue colours and blue and steering wheel blue - white color - Exterior sking for truck DAF XF 105 (Greek Flag applied to
...Algorithmic and Data Mining projects. Must understand how to program advance machine learning clustering algorithms and other data mining algorithms. Must be also good at Mathematics and Computer Science to understand complex data coding needs. Must be expert in Python, Linux, Unix, C/C++, CUDA programming. Several Projects (Major, Big and Small) will
I need some one who has expertise on Interior design as well as 3D rendering skills. So this job is to rendering realistic view of the interior space in 3D. Prefer I can walk around on multiple view points (Virtual tour) experience Please NOTE I HAVE ALREADY GOT THE FOLLOWING FILES: I already have CAD files and Revit files for Preliminary Site
...work on Algorithmic and Data Mining projects. Must understand how to program genetic algorithms, the 8-queens problem and other data mining algorithms. Must be also good at Mathematics, Statistics and Computer Science to understand coding needs. Must be expert in R, R-studio and/or Python programming. Coding examples will be given to provide details of
We are Vanessa Larré, an architectural and design firm from Balneário Camboriú - BR, and we need about 16 rendered images of one of our projects. It will be only the interior of a house and we've got the base model on sketchUP. Deadline in 2 weeks.
...a user friendly app/soft ware which is targeted to be used by a person without academic mathematics knowledge. The app/soft ware have to get data from costumer and feed the data to the code and return the results. The app/soft ware should have the ability to get expanded as we develop the model. The app/soft ware should be compatible to be used by different
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742963.17/warc/CC-MAIN-20181115223739-20181116005739-00268.warc.gz
|
CC-MAIN-2018-47
| 10,241
| 31
|
http://myxpcar.com/visual-studio/ssis-script-cannot-show-visual-studio-for-applications-editor.php
|
code
|
SSIS is a VS-related application, not SSMS." Absolutely, but SSIS/BIDS ships with SQL Server and the VSTA environment is also part of that install, hence the SQL repair suggestion. Follow Me!Subscribe by emailEnter your email address: Recent Posts Azure SQL Server 2016 VM PASS Summit Announcements: SQL DW free trial PASS Summit Announcements: Power BI reports on-prem in SSRS PASS Appears that there is a version problem here as it is looking for the version 18.104.22.168 VSTA stuff in the GAC with one PublicKeyTokey and when I view the windows/assembly folder, Finally, I would like to have an inkling for how this happened, where one day it's working and the next not. navigate here
Create Integration Services Project3. and of course the link to the asp page shows less than nothing. Only the SSDT and SSIS. How it is possible! http://stackoverflow.com/questions/12706780/cannot-show-visual-studio-tools-for-application-editor-in-ssis-2012
Solution to VSTA editor error in SSIS 2012, posted on December 13, 2012 by James Serra. The script task in SSIS … Reply from RKO (January 14, 2014 at 12:17 am): That's great!!! Note: corrective action requires Administrator privileges. Steve - thanks for the location of those MSIs.
I had the same issue. I then started with a fresh (virtual) PC, ran the SQL Server 2012 installer, and installed the DB engine, SSIS and SSDT at the same time. You can install the component manually from your installation disk or download it from the \redist\VSTA\runtime\ folder.
If you also copy the install files of SQL Server 2012 from the DVD to a file system, you can install it from the file system instead of playing with the DVD. This didn't exist when we were running SQL Server 2008. This file will delete the keys (reg delete) mentioned, and also remove various directories (rd) related to loading SSIS settings in VSTA.
I look forward to getting your response, and getting this working!! Using the following works: cd "C:\program files (x86)\microsoft visual studio 9.0\common7\ide", then vsta.exe /hostid SSIS_ScriptTask and vsta.exe /hostid SSIS_ScriptComponent. But when I try to create a new script component and click Edit Script, I get "Cannot Show Visual Studio 2012 Tools For Applications Editor". Please tell me that I do not need to uninstall SQL Server and repeat that again.
I think now I should have put VS as the first reference of this post. When I check the Event Viewer, it states the following: The global template information is out of date. Now, let's see if my package will upgrade without any error. Cannot Show Visual Studio 2008 Tools For Applications Editor: Failed To Create Project At Location.
For more information, see Help and Support Center at go.microsoft.com/…/events.asp. When I try to run the command (as Administrator) I get the following: Command Line is not valid. Thanks for this information, it looks like it helped a few people. Please open the Programs and Features window, and check which of the following components are not installed (supposing it is a 64-bit SQL Server platform): Microsoft Visual Studio Tools for Applications
I can't seem to find the 'Programs and Features window'. Vs 2012 Shell Isolated If you can't do that try the manual approach below…. Posted by Microsoft on 1/10/2012 at 3:42 PM We were unable to reproduce this error.
Followed by the following commands to re-register the SQL Server Integration Services components (SQL Server 2008): cd "C:\program files (x86)\microsoft visual studio 9.0\common7\ide\", then vsta.exe /setup /hostid SSIS_ScriptTask and vsta.exe /setup /hostid SSIS_ScriptComponent. As you might notice, the x64 Runtime is now missing… No problem, let's install Integration Services now.
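Collecting the commands quoted in this thread into one place (SQL Server 2008 / Visual Studio 2008 Tools for Applications; run them from an elevated command prompt, and adjust the path for other Visual Studio versions):

    cd "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\"
    vsta.exe /setup /hostid SSIS_ScriptTask
    vsta.exe /setup /hostid SSIS_ScriptComponent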
I've been reviewing a number of sites with those who have experienced similar difficulties, and several have reported that uninstalling and reinstalling Visual Studio does not solve the problem for them. It should look like this… and when you click the "Edit Script…" button the VSTA script editor window opens when it works (which is most of the time). Reply from Jeff (14/10/2014 at 21:13): Pieter, thanks for the thorough analysis. I had the exact same issue, with the same scenario of installing SSIS originally, then going back and adding SSDT afterwards.
I solved my problem using vsta.exe /hostid SSIS_ScriptComponent… I spent many hours before I found you. You may not have anything Express installed. - Arthur. I have SQL Server Developer edition and …
Click the "Edit Script" button ---> the error appears. It does not matter if Language is set to C# or VB.Net. I seemed to need the design-time as well as the run-time versions installed in order to execute the packages.
Ugh. Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510969.31/warc/CC-MAIN-20181017023458-20181017044958-00325.warc.gz
|
CC-MAIN-2018-43
| 6,601
| 14
|
https://github.com/vibe-d/vibe.d/pull/270
|
code
|
Add reading encoded file from disk if present #270
This commit adds the ability to send compressed files if present
To test it out
This fixes part of #143. I'd like to see the needed change for libevent. I'm curious what's the impact on performance. Is somebody working on it?
Thanks! I meant to implement that for a while now, good to have that available now.
Regarding the libevent2 issue, if you have some time to look at this it would be great. I currently have so much stuff that has higher priority that I won't get to it in the near future. The (wrong) code is in libevent2_tcp.d line 267 and following. For some reason it crashes and I couldn't find sufficient documentation or a working example to compare against. Maybe
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160568.87/warc/CC-MAIN-20180924145620-20180924170020-00155.warc.gz
|
CC-MAIN-2018-39
| 747
| 7
|
http://labs.gerbenrobijn.nl/2008/01/03/fitc-amsterdam-25-discount/
|
code
|
On the 25th and 26th of February it's time for Flash in the Can to visit Europe for the first time in their history.
And we the Dutch have the honour to host this great event in Amsterdam!
FITC isn't just about Flash but covers much more, like Flex, ActionScript, design, interaction, etc.
There will be some great speakers in Amsterdam like:
- Aral Balkan
- Joshua Davis
- Peter Elst
- Colin Moock
- Erik Natzke
- And much much more!
So get your tickets now at fitc and use the code “labs25” to get a 25% discount when ordering your tickets!
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 542
| 11
|
https://forum.arduino.cc/t/basic-if-statement-issue-in-a-large-program/166078
|
code
|
See attached programme.
This is a very much cut down version of the whole program, but it is still bugging me.
On line 323 “if (millis() >= testing)” for some reason this If statement is not If-ing when Millis() is greater than or equal to “testing” until much later.
A few lines before I have set testing to be 4000 larger than the then current millis(), which is located inside an “if autoApos == 2”. Then autoApos is changed to 3 and the next If statement should run if autoApos == 3 && millis() >= testing.
What I am finding is that millis() can be 40000 when “testing” is 30000 (ish) before the If statement runs. I have tried many variants but I always get the same result, millis() is always 10000 to 20000 larger than the “testing” value.
No doubt someone will spot the stupid and obvious error instantly, my eyes are still fuzzy after writing the 30k programme (only 9.4k uploaded, the rest isn’t in use at this point).
I have been using Serial.print to tell me the values of different variables, out of all the different parts it only appears to be the If statement on line 323 which doesn’t add up.
If I remove the millis() if statement and just have the “if autoApos == 3” it works fine (but instantly).
The final idea is to have a random time form 3 to 10 seconds for the delay.
Another interesting thing is when the time delay If was inside the autoApos == 3 If, I had a Serial.print after the dodgy combined If autoApos && delay statement which should have ran all the time for testing (causing the MCU to run slow). This would work fine until autoApos == 3, where it would stop until the If statement was satisfied. This made no sense as the Serial.print was after the If statement (outside completely) and would otherwise run except if the If statement was 1/2 true (when autoApos == 3).
I know the If statements should be if (previous_time <= millis() - number). I've currently written them with a + for simplicity. This program will not run for more than 10 hours.
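For reference, a minimal sketch of the non-blocking pattern being described here (variable names borrowed from the thread; this is an illustration, not the attached To_upload.ino): the timestamp and the random 3-10 second wait are captured once when the state changes, and the comparison is done by subtraction so it also survives millis() rollover.

    unsigned long testing = 0;    // timestamp captured when the wait starts
    unsigned long waitTime = 0;   // random 3-10 s delay
    int autoApos = 2;             // state variable, as in the thread (assumed to start at 2 here)

    void setup() {
      Serial.begin(9600);
      randomSeed(analogRead(A0)); // seed the PRNG from a floating analog pin
    }

    void loop() {
      if (autoApos == 2) {
        testing = millis();              // capture the start time once
        waitTime = random(3000, 10001);  // 3000..10000 ms, i.e. 3 to 10 seconds
        autoApos = 3;
      }
      if (autoApos == 3 && (millis() - testing >= waitTime)) {
        Serial.println("delay elapsed"); // the action guarded by the timer
        autoApos = 4;                    // leave the state so this fires only once
      }
    }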
To_upload.ino (57.2 KB)
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00101.warc.gz
|
CC-MAIN-2022-05
| 2,037
| 12
|
http://www.edugeek.net/forums/wireless-networks/25226-asset-management-tracking.html
|
code
|
Have you looked at GLPI - Gestionnaire libre de parc informatique ?
Hi all, I've spent soo long looking for some software to fit my spec as close as possible for this. What I'm after is a web-based asset management, or more importantly asset tracking system (pref in php/MySQL). I have been currently using the Hardware Management System from MST Software which has been fine for me to use, but now I'm having problems using it on Vista/Server 2008 plus i prefer web based apps as I tend to forget the laptop!!
My biggest difference from most of the threads on here is that I don't want it to discover anything, just manual entry. The reason for this is we have several bases/units on separate sites, but I manage all the kit and it's on 'my' stockbook.
I like the look of IRM but there seems to have been no movement with the project for a few years, the guy has posted on edugeek a few times.. (budgester?)..
I've just come across one on sourceforge here
I'm using One or Zero for the helpdesk so that aspect isn't needed as such, but if I can get one that does both then thats great!
Has anyone another one I've missed, or any experiance of these?
Briefly, I've downloaded it but it looked a bit "too much", if you know what I mean. I'll have another look at it and see what's what!
+1 for GLPI. We've recently moved to this and once over the initial learning curve its very good.
Have to agree here. GLPI combined with OCS Inventory NG (OCS Inventory NG - Welcome to OCS Inventory NG web site !) makes asset management easy.
OCS audits your hardware. GLPI can then plug into your OCS repository. PM me if you would like more info
I know exactly what the TS is saying,
It sounds as if he wants a straightforward fixed asset solution, web based front end that updates a central database so that he doesn't have to worry about syncing handheld devices and other issues with keeping data consistent.
And with such a system you want to be able to modify the tables and fields in a straightforward manner.
He doesn't want auto-discovery of IT assets, as that adds complexity to the app that he doesn't require. Now, if GLPI and OCS can do the fixed assets while avoiding having to set up other features, and are straightforward to set up, then that would be a good recommendation - but x number of features and capabilities normally means complexity you don't need... and perhaps it would be an idea to recommend a purely fixed-asset system as well as a complete enterprise asset management system like GLPI/OCS.
The only system I've seen that does the job of fixed asset tracking well is unfortunately a proprietary solution, and it can get quite expensive. I've been looking for some time for a cheap or open source solution for fixed asset management that is client-server with a web-based front end, but I haven't really found anything good enough. GLPI/OCS look great for EAM but, as someone has said, I think there is going to be a learning curve to get over with such a full-featured product.
You can do fixed manual asset tracking with GLPI, you just don't bother setting it up to use OCS and then you can add/edit stuff to your hearts content manually.
After taking a look at GLPI again I think it's a go, just need to set it up on a test server and see what I can cut out of the interface that I don't need.
I need to work out a way to make it look better as well! I assume you can change them in a cp somewhere!
I'll update thi when I get somewhere with some feedback, thanks for your replies!
How does GLPI work with pocket pc's as barcode scanners.....i..e is there a windows mobile front end for updating from handhelds.
how do you get glpi to read barcodes
can you use a barcode scanner with this software?
Most pc connected barcode scanners can be set to just dump the barcode they have just read as text into whatever application you have open at the time.
So if it's a webbased system then you go to the serial number field and scan the item?
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826025.8/warc/CC-MAIN-20140820021346-00235-ip-10-180-136-8.ec2.internal.warc.gz
|
CC-MAIN-2014-35
| 4,154
| 28
|
http://eat3d.com/forums/general-chat/patches-grass
|
code
|
Patches of grass
I wasn't sure where to put this thread, but this looked like the right topic.
I'm kinda wondering about something lately. I've been playing a lot of Call Of Duty 4 and some Viking: Battle for Asgard, and in these games there are a lot of patches of grass to make it look like the grassy area is pretty dense.
I was wondering if someone (maybe Riki?) knows: do developers put in every single one of these patches manually? Or do they have some kind of painting software?
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700014987/warc/CC-MAIN-20130516102654-00074-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 480
| 4
|
https://arduino.stackexchange.com/questions/67566/lipo-battery-level-display-using-nodemcu
|
code
|
I'm designing a connected collar for pets for a project. For this I am using a NodeMCU based on the ESP8266, powered by a LiPo battery; I'm guessing around 1000mAh of capacity is enough for respectable autonomy (any advice?). LiPo model: https://www.lipolbattery.com/lithium%20polymer%20battery.html In order to charge the battery I have found this USB LiPo charger (https://learn.sparkfun.com/tutorials/lipo-usb-charger-hookup-guide/all). My first question is: is it still usable for higher capacity LiPo batteries (1400mAh)? Also, I would like to display the battery level via an RGB LED (example: green for a FULL battery and RED for a LOW battery level in need of charging). Is there any way to do this? Thank you
It is still usable for higher capacities. I recommend the TP4056 and a step-up converter. You can use a resistor divider (one that safely drops the voltage to the NodeMCU's 3.3V range) and connect its output to an analog pin of your NodeMCU. The values of the resistors must be high, like 10k. You can then simply connect your LEDs to the NodeMCU and, in code, turn on the red LED if the voltage is less than 3.2V and the green LED when it is above 3.8V.
Answering the second question "Also I would like to display the battery level...?" (next time, please open a new question for that):
To measure the module voltage level (a.k.a. VCC) of a NodeMCU (which is just a board containing an ESP8266 chip) programmed in Arduino IDE (which I assume given that you ask on the arduino stackexchange), this here applies:
To read VCC voltage, use ESP.getVcc() and ADC pin must be kept unconnected. Additionally, the following line has to be added to the sketch:
This line has to appear outside of any functions, for instance right after the #include lines of your sketch.
So you could just drive a red and a green LED (don't forget the mandatory current limiting resistor) from any other two pins of your NodeMCU. Then switch them in your sketch if VCC falls below 3.2V (you may need to experiment with that because of measurement tolerances); a sketch to start with is given below.
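A minimal sketch along those lines (an illustration, not taken from the linked answer): it assumes the standard ESP8266 Arduino core, where the line referred to above is the ADC_MODE(ADC_VCC); macro, and two LEDs on the NodeMCU pins D1/D2 through current limiting resistors; the thresholds are guesses that need tuning. Note that on a NodeMCU the chip sits behind a 3.3V regulator, so ESP.getVcc() may not track the LiPo voltage as faithfully as the resistor-divider-into-A0 approach from the first answer.

    ADC_MODE(ADC_VCC);          // must appear at file scope so ESP.getVcc() reads the supply voltage

    const int RED_LED   = D1;   // hypothetical pin choice
    const int GREEN_LED = D2;   // hypothetical pin choice

    void setup() {
      pinMode(RED_LED, OUTPUT);
      pinMode(GREEN_LED, OUTPUT);
    }

    void loop() {
      uint16_t vcc = ESP.getVcc();           // module supply voltage in millivolts
      digitalWrite(GREEN_LED, vcc >= 3800);  // treat ~3.8 V and above as "full enough"
      digitalWrite(RED_LED,   vcc <= 3200);  // treat ~3.2 V and below as "needs charging"
      delay(5000);                           // re-check every few seconds
    }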
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510454.60/warc/CC-MAIN-20230928194838-20230928224838-00731.warc.gz
|
CC-MAIN-2023-40
| 2,057
| 7
|
https://team.userinterface.us/ai-impact-on-society-australias-ai-camera-catch-drivers-on-phones-china-makes-deepfakes-illegal/
|
code
|
AI Impact On Society: Australia’s AI Camera Catch Drivers on Phones | China Makes Deepfakes Illegal
SEE PREVIOUS VIDEO ON AI IMPACT ON JOBS: https://youtu.be/XWEJVIPJOgQ
In today’s video we take a look at AI impact on society with Australia tech and their implementation of AI cameras to catch drivers using their phones and ticketing them. We also take a look at the latest deepfake China news and lastly the new legislation passed in California regarding Deepfake.
#ai #deepfakes #technews
✅ Get Your Crypto Debit Card from Crypto.com and we both get $50 USD 🙂 https://platinum.crypto.com/r/f2aesqmq9x ✅
Pick up your Crypto Apparel @ www.cryptoblood.io/apparel
DISCLAIMER: THE COMMENTS AND OPINIONS SHARED IN THIS VIDEO ARE MY OWN AND SHOULD NOT BE TAKEN AS FINANCIAL ADVICE. PAST PERFORMANCE IS NOT INDICATIVE OF FUTURE RESULTS. DO YOUR OWN RESEARCH AND DO NOT TAKE MY WORD ON ANY CRYPTOS TALKED ABOUT IN THIS VIDEO; I AM NOT A PROFESSIONAL AND DO NOT HOLD ANY FINANCIAL LICENSES.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413551.52/warc/CC-MAIN-20200531151414-20200531181414-00300.warc.gz
|
CC-MAIN-2020-24
| 1,001
| 7
|
https://wiki.lazarus.freepascal.org/index.php?title=BidiMode&oldid=42342
|
code
|
Some languages (Arabic, Hebrew, Farsi ...) write characters from right to left, and adequate support for this must be implemented in Lazarus.
- 1 BidiMode
- 2 ParentBidiMode
- 3 BidiMode for developers
- 4 Add BidiMode support to LCL components
- 5 How to test RightToLeft reading
- 6 References
bdLeftToRight bdRightToLeft bdRightToLeftNoAlign bdRightToLeftReadingOnly
The default value of BidiMode; it means reading and ordering are the normal "Left To Right".
Makes a control use right-to-left text reading and right alignment, depending on the kind of control.
In Delphi, if there is an Alignment property, it is affected by reversing taLeft to taRight and vice versa; in Lazarus we instead reverse the alignment with the FlipControls function, which is more consistent with the Anchors and Align properties. bdRightToLeft also places the scrollbar on the left, and if there are cells or menus they must be ordered from the right.
For example, in bdLeftToRight: Menu1, Menu2, Menu3; in bdRightToLeft: Menu3, Menu2, Menu1.
That means it is not just right alignment, it is right-to-left order. bdRightToLeft must also affect text reading, depending on whether the OS supports it; the words of a sentence are treated like cells:
in bdLeftToRight: Word1 Word2 Word3; in bdRightToLeft: Word3 Word2 Word1.
We read Word1 first, then Word2: we read from right to left. But drawing text is more complicated than ordering cells. The function that draws the text takes care of English/Latin words and draws them left to right, so if both languages are mixed in the same sentence the draw function draws consecutive English words in left-to-right order to keep them readable; symbols such as ? or ! are treated as right-to-left characters.
For testing purposes I used an English word with a symbol to test right-to-left text reading,
such as: Word?
For special cases, or in fact for more compatibility with Delphi, there are two more values:
bdRightToLeftNoAlign: right to left, except that the Alignment property is not reversed; in Lazarus it works when FlipControls is used.
bdRightToLeftReadingOnly: right-to-left text reading only; the scrollbar and alignment are not affected.
If it is True, the control inherits the BidiMode value from its parent.
BidiMode for developers
When you build your own controls and try to make them support right-to-left or BidiMode, you must not access the BiDiMode property directly; instead you must use:
function UseRightToLeftAlignment: Boolean; virtual;
function UseRightToLeftReading: Boolean; virtual;
function UseRightToLeftScrollBar: Boolean;
function IsRightToLeft: Boolean;
This is because some controls do not need to affect the alignment: for example, TButton is always centered, and TMainMenu always takes right alignment when it is right to left.
It is not just reading words and cells; it is everything that depends on order (positions). For an application there are menus and controls placed on the form, and they must be positioned from the right.
Add BidiMode support to LCL components
Most operating systems now support RightToLeft, but there are controls that are not supported yet, or controls implemented natively in the LCL itself (TLabel, TGrid). So we have 3 kinds of controls:
- Standard controls: easy to handle; just add a flag to switch them to RightToLeft (TEdit, TList, TComboBox, TCheckBox).
- Standard controls not supported by the OS: there is no option but to wait for the OS developers to implement support (TListView, TTreeView), or to use native controls that already support it.
- Native controls: harder work to make them support RightToLeft, or we must add new controls that already have this feature (TLabel, TGrid).
Making an application right-to-left ordered has 4 phases:
Phase 1: Add BidiMode property to TControl
TBiDiMode is already declared in Classes:

  property BiDiMode: TBiDiMode read FBiDiMode write SetBiDiMode stored IsBiDiModeStored;
  property ParentBiDiMode: Boolean read FParentBiDiMode write SetParentBiDiMode default True;

BidiMode must not be stored if ParentBidiMode = True:

  function TControl.IsBiDiModeStored: Boolean;
  begin
    Result := not ParentBiDiMode;
  end;

Add virtual functions:

  function IsRightToLeft: Boolean; virtual;
  function UseRightToLeftAlignment: Boolean; virtual;
  function UseRightToLeftReading: Boolean; virtual;
  function UseRightToLeftScrollBar: Boolean; virtual;

The BiDiMode property is public and must be published in every control that needs RightToLeft.
Phase 2: Modify Controls
Add RightToLeft to Standard controls,
|TForm||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TLabel||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TButton||Working||Partially Implemented||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TEdit||Working||Partially Implemented||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TListBox||Working||Partially Implemented||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TComboBox||Working||Partially Implemented||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TCheckBox||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TStaticText||Working||Partially Implemented||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TGroupBox||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TRadioButton||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|Menus||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TBitBtn||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TSpeedBtn||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TRadioGroup||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TCheckGroup||Working||Working||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
|TGrid||Partially Implemented||Partially Implemented||Not Implemented||Not Implemented||Not Implemented||Not Implemented|
Phase 4: Functions and utilities useful for a multi-language application
FlipControls; virtual; changes Right to Left and Left to Right in these properties (Left, Align, Anchors and Alignment); if we compare with Delphi(TM), Anchors and Alignment were excepted there.
A function is needed to detect whether a language is RightToLeft.
How to test RightToLeft reading
Create a new form, then add a TButton or TEdit and set the caption to "OK?" (you must add ?, ! or . to the last word, without a space). Now set BidiMode to bdRightToLeft and you will see the ? at the left of the word.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00546.warc.gz
|
CC-MAIN-2021-49
| 6,527
| 63
|
https://deepai.org/publication/term-community-based-topic-detection-with-variable-resolution
|
code
|
Facing an ever-growing amount of text data, automated methods of text evaluation have become indispensable for finding and analyzing information. Computerized information retrieval started to evolve many decades ago as one of the earliest fields of computer applications but continues to make spectacular progress in the context of recent machine learning developments.
The classical information retrieval task is to serve some information need formulated as a concrete query. However, given the sheer volume of texts available, there are many situations where, before asking detailed questions, one must first gain some insight into what kind of information is contained in the texts at all and what subject areas are covered.
This is where automatic topic detection, also called topic mining or topic modeling, can help. This process takes a text corpus, i.e., a large collection of text documents, as input and produces as output a set of topics which are meant to represent the various subjects written about in the corpus documents. The identification of topics within a corpus can be used in many ways: for a quick overview of the content and a better understanding if little is known about the corpus or its context; for ordering the documents of the corpus, similar to a classification, but more flexible in that it allows one document to be assigned to several topics rather than belonging only to one class; for analyzing the temporal evolution of thematic content or its relation to metadata like authorship or publisher. Computationally, it also can be seen as a method of dimensionality reduction for the corpus documents and as such lends itself as a building block in further machine learning analyses of the documents.
Like with any computerized application, at both ends of the process some translation step is needed: at the input side a quantification which turns the corpus into some mathematical data structure, and at the output side an interpretation of what the algorithmically derived output actually means. The latter step can be highly problematic in situations involving natural language as it carries more ambiguity and context dependency than numerical or highly formalized data. This can be an obstacle for finding indisputable and verifiable interpretations. Therefore, involving subject experts who ideally are well trained in methods of text interpretation is crucial.
While this paper focuses on general and technical aspects of the method rather than on comprehensive domain applications, it is written as a collaboration of a computational data scientist and a political scientist in order to keep a good balance between the computational and the interpretive aspects. Political science is, in fact, one of the domains that benefit most from reliable methods for automated topic discovery: While text is an indispensable source of knowledge about politics, the discipline, in line with the general trend, has recently been confronted with a flood of relevant textual material [1, 2, 3]. The background which motivated the present research is the need to scan and understand the strategic significance of huge amounts of incoming text documents of scientific, political, social and economic nature in a strategic unit of a large research organization.
Regarding the quantification of the corpus, the natural candidate for a data structure is the so-called word-document matrix, which keeps a record of which words of the total corpus are contained in which document and how important they are for the document. The earliest approaches to topic discovery, going back to the 1980s, applied purely algebraic considerations to that word-document matrix. The currently predominant approaches to topic discovery can be grouped into two distinctly different lines: One is based on probabilistic generative models where topics are parameters (more specifically: probability distributions on words) that can be determined by statistical inference. The other one is based on transforming the word-document matrix into a network in which the topics show up as communities of strongly linked nodes. We will mention references for some of the existing variants of both lines in the next section.
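As an illustration of that quantification step, one common weighting of the word-document matrix (standard tf-idf; the paper itself may use a different scheme) is

    X_{w,d} = \mathrm{tf}(w,d) \cdot \log \frac{N}{\mathrm{df}(w)},

where tf(w,d) is the frequency of word w in document d, df(w) is the number of documents containing w, and N is the total number of documents in the corpus.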
There is a striking imbalance between the popularity of the two lines. The number of research papers using probabilistic topic models exceeds the number of publications following network-based approaches of topic detection by two orders of magnitude. However, in spite of many impressive successful applications of the probabilistic models it is not at all clear that they offer the best solutions in all situations. An initial investigation of several network-based methods in our group showed a very promising potential and motivated further improvements of network-based topic detection which will be described in this paper.
The structure of the paper is as follows: In the next section we give a brief overview of some of the related work in the areas of topic mining and community detection. Section 3 presents our particular version of network-based topic detection. We exemplify the method by applying it to a well known corpus of BBC news reports which has been widely used for text classification, topic modeling, and other text mining tasks in the literature (e.g., [6, 7]). In Section 4 we investigate the influence of two of the adjustable parameters of the method: the reduction percentage and the resolution parameter, and show how the latter one can be used to identify more and more topics on finer scale. For comparison, in Section 5 we apply the best-known probabilistic topic modeling approach, Latent Dirichlet Allocation (LDA), to our example corpus and elaborate on observations regarding topic interpretability and other differences. Section 6 draws some conclusions.
There are three main new contributions of this paper to the field of topic discovery: First, we describe a particular method for term ranking which is an important ingredient for producing and interpreting high-quality topics; second, we employ for the first time in the context of term co-occurrence networks the Leiden algorithm for a generalized modularity optimization as a tool for topic detection and show that the resolution parameter of the generalized modularity can be used for controlling the resulting topic granularity, which is particularly relevant from a domain expert perspective; third, we present new insight into questions of topic interpretability on the basis of expert evaluations and supported by word embeddings.
2 Related work
Methods of automatic topic detection (as well as other methods used in this article: keyword extraction and word embeddings) are based on the distributional hypothesis, which states that observations about the distribution of word occurrences allow one to draw conclusions about semantics. The first systematic approach to topic detection was Latent Semantic Indexing (LSI), based on singular value decomposition of the word-document matrix. Another successful algebraic method uses non-negative matrix factorization (NMF).
Going beyond purely algebraic operations, probabilistic Latent Semantic Analysis (pLSA) [11] regards the observed word distribution in the documents as the outcome of a stochastic process that results from the mixture of two multinomial distributions, which can be reconstructed using stochastic inference. Latent Dirichlet Allocation (LDA) [12] follows a similar strategy but goes one step further in assuming that the mixture is not between fixed but random multinomial distributions, which are drawn from a Dirichlet distribution.
LDA has become enormously popular, not least because of several easy-to-use software implementations which employ efficient inference techniques like collapsed Gibbs sampling [13]. It has been applied to many diverse text collections like scientific publications, news collections, literary corpora, political debates, historical documents, social media posts, and many others; for reviews we refer to [14, 15, 16]. On the other hand, LDA motivated the development of a plethora of similar generative models with the aim of improving the method or of taking better account of special properties of the text collections to be studied. Examples are generative models which can detect hierarchies of topics [17, 18].
Probabilistic topic models can be further enhanced by supplementing the word co-occurrence information with document metadata, like information on authorship, geographical location, or relatedness to events [19, 20]. The Structural Topic Models (STM), which have proven useful in political science applications, also belong to that category [21].
A fundamentally different line of topic detection methods arose from graph-theoretical evaluation of word-document co-occurrences. We refer to [22] for a survey of graph-based text analysis. So-called co-word maps were produced in a semi-manual fashion in early bibliometric studies [23]. TopCat [24] is one of the first graph-based procedures for topic detection. It is based on hypergraph clustering in co-occurrence hypergraphs of so-called frequent itemsets of named entities. Another approach is known under the name KeyGraph [25]. It started as a method for keyword extraction based on a word co-occurrence graph on sentence level, but was later extended for event detection [26, 27].
Increased interest in network analysis and in particular the concept of community detection furthered the development of graph-based topic discovery. We refer to [28, 29] for reviews on community detection. Several methods of community detection have been used for topic discovery: One approach finds topics as communities in a KeyGraph by means of the Girvan-Newman algorithm, which involves the edge betweenness. Another employs modularity maximization using the Louvain algorithm; similar approaches can be found in [35, 36, 37, 38]. Louvain-based community detection was also applied in [39, 40], in combination with a principal component analysis, to co-word maps. The Infomap algorithm for community detection via a random walk has also been used. A further approach identifies topics as cliques in a word co-occurrence network. The hierarchical semantic graph model of another line of work is based on a hierarchy of terms and uses subgraph segmentation via the normalized cut algorithm for community detection. Yet another method finds topics as communities in a bipartite document-word graph with a Stochastic Block Model. This approach establishes an interesting connection to the probabilistic topic models, as a Stochastic Block Model itself is a generative model. In fact, this graph-based topic detection method is closely related to pLSA. On the other hand, Stochastic Block Models have also been shown to be related to maximizing a parametrized generalized modularity.
In passing we remark that network analysis is applied to document collections not only in the form of word co-occurrence networks, but also by studying co-author and citation networks, and both, too, have been exploited for topic discovery [50, 51].
Term ranking will play an important role in our approach. On document level, term ranking is closely related to the problem of unsupervised key word extraction. This field is summarized in [52, 53]. On corpus level, we are not aware of any method that is comparable to ours. However, there is a vague resemblance to the method for detecting hot words in microblogs described in .
Word embeddings like Word2Vec are a very efficient way of capturing the contextual information contained in large text collections for use in semantic text analysis. While we use pre-trained fastText embeddings for structuring and assessing the topics that we find by detection of term communities, other authors have used similar word embeddings directly for identifying topics through clustering in the embedding space or indirectly for improving probabilistic topic models .
3 A term-community-based topic discovery method
It is a very intuitive idea that topics within a text corpus show up as patterns in the overall word usage in the corpus documents. Graphs as the mathematical structure for representing networks of entities (so called nodes or vertices) which are linked (through so called edges) are an obvious choice for formalizing this idea. What is less obvious is which of the many possible ways of transforming the corpus into a graph is the most effective one for the present purpose and how exactly topic-forming patterns can be identified.
In this section we describe our particular choice which we found to be successful in the analyses of many text corpora. We give reasons for several specifics in our procedures but do not claim that we have tested all or even a big fraction of the conceivable alternatives. Optimizing model parameters is particularly problematic in the present situation not only because of the extension of the parameter space but also because of the lack of a convincing target function which describes the model success. We will come back to this point in Section 5.
As mentioned before, we use a corpus of 2225 news articles from BBC news dating back to 2004 and 2005 as an example corpus for explaining and evaluating our method. The documents of this corpus each consist of a few hundred words.
3.1 Setting up the corpus network
Based on a corpus of text documents, we define the corpus graph as a weighted undirected graph $G_p = (T_p, E, w)$, consisting of a vertex set $T_p$, an edge set $E$, and an edge weight function $w$.
The vertex set $T_p$ is a certain subset of the unique and normalized words or word combinations appearing in the corpus (which hereafter we will subsume under the name terms), where $p$ with $0 < p \le 100$ is a parameter that controls the number of vertices.
More specifically, for $p = 100$ the terms that form the vertex set result from a fairly standard text preparation pipeline, consisting of tokenization, lemmatization (in order to consolidate several inflected forms of the same word), and the removal of stopwords (frequent words with little meaning), short tokens (less than three characters), exotic characters, and tokens consisting mainly of digits. We also retain only nouns, adjectives, and proper nouns, as usual in NLP tasks that focus on factual content.
Applying the procedures listed in the previous paragraph to the original documents leads to single-word term vertices only. Yet, retaining compound terms that consist of several words as units in the corpus graph is a desirable enhancement of the method because it prevents the loss of meaning that is a consequence of splitting up a compound term into its components. Technically, we can include compound terms without changing the pipeline described above using a formal trick: Before we put the documents into that pipeline, we connect the individual words of compound terms by underscores (e.g., “department of homeland security” becomes “department_of_homeland_security”). This renders the compound term a single token which survives the pipeline as a unit. However, identifying compound terms in documents is not an easy task. We experimented with various statistical and linguistic approaches. While it is possible to identify many meaningful combinations in that way, one also produces several nonsensical combinations which can create serious confusion in the results. Thus, we decided not to search for general compound terms but only for those which show up in named entity recognition. Concretely, we incorporate entities of the types events, facilities, geographical and political entities, languages, law, locations, nationalities or religious or political groups, organizations, persons, products, and works of art which consist of 2, 3, or 4 words.
For all these document preparation steps, we use the Python library spaCy. Working with the small language model en_core_web_sm turned out to be sufficient for our purposes; using larger language models did not lead to significant changes in the results.
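To make the pipeline concrete, the following sketch shows one possible way such a preparation step could be implemented with spaCy and the en_core_web_sm model; the function names, the exact entity-type selection, and the compound-joining logic are illustrative choices of ours, not code from the original implementation.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

KEEP_POS = {"NOUN", "ADJ", "PROPN"}          # content-bearing parts of speech
ENT_TYPES = {"EVENT", "FAC", "GPE", "LANGUAGE", "LAW", "LOC", "NORP",
             "ORG", "PERSON", "PRODUCT", "WORK_OF_ART"}

def mark_compounds(text):
    """Join multi-word named entities (2-4 words) with underscores so that
    they survive tokenization as single terms."""
    doc = nlp(text)
    for ent in reversed(doc.ents):           # reversed: keep character offsets valid
        words = ent.text.split()
        if ent.label_ in ENT_TYPES and 2 <= len(words) <= 4:
            text = text[:ent.start_char] + "_".join(words) + text[ent.end_char:]
    return text

def document_terms(text):
    """Reduce a document to a list of normalized, content-bearing terms."""
    terms = []
    for tok in nlp(mark_compounds(text)):
        if "_" in tok.text:                  # compound term: keep as a unit
            terms.append(tok.text)
            continue
        if tok.is_stop or tok.pos_ not in KEEP_POS:
            continue
        lemma = tok.lemma_.lower()
        if len(lemma) < 3 or not lemma.isalpha():
            continue
        terms.append(lemma)
    return terms
```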
After these preparations every document has been stripped down to a collection of terms which still carry the main subject content of the document. Let $T$ denote the set of all unique terms remaining in the corpus. However, usually quite a few of these terms are of general nature and not important for the main message of the document. Having those terms in the corpus graph blurs its ability to represent thematic links. Working with a smaller subset of $T$, which we denote as $T_p$, where $p$ is the percentage of terms retained, can prevent that effect, as we will discuss in more detail in Section 4.
In order to judge which terms to drop from the corpus graph, a function for document term ranking is needed which produces a rank order of terms depending on their significance for the message of the document. This is a well-known task in the context of key term extraction. The naïve solution would be to rank a term $t$ in document $d$ by its frequency $\mathrm{tf}(t,d)$ in the document, but it is well known that this unjustly favors terms that tend to occur frequently, independent of the specific content of the document. The long-standing solution to this problem is to counterbalance the effect of overall frequent words by the inverse document frequency
$$\mathrm{idf}(t) = \log\frac{|D|}{n_t},$$
where $|D|$ is the number of documents in the corpus and $n_t$ is the number of corpus documents that contain the term $t$, and to use $\mathrm{tf}(t,d)\cdot\mathrm{idf}(t)$ for term ranking in the document $d$.
However, this typical bag-of-words approach—where all words of a document are treated equally, independent of their position—neglects the observation that important words of a document are usually not distributed evenly over the whole document. Rather, they tend to appear in groups, and for many document types it is also common that authors place especially many words that characterize the document content at the top of the document, notably in title, subtitle, and abstract. A term ranking method that takes these two observations into consideration is PositionRank, which is a modification of the TextRank method, introduced in analogy to the PageRank method for ranking within a network of linked web pages.
In order to combine the advantages of the frequency arguments and the positional arguments for term ranking, we devised our own ranking method, posIdfRank, which works as follows: For a corpus document $d$, define a graph $G_d$ (not to be confused with the corpus-wide graph $G_p$) which has the set of unique terms of $d$ as vertex set, and an edge between two vertices $t_i$ and $t_j$ iff the shortest distance between the terms $t_i$ and $t_j$ within the document is less than or equal to some window size $k$ (which is an adjustable parameter of the method).
We now consider a random walk on $G_d$ in which the transition probability between two terms $t_i$ and $t_j$ combines three ingredients: a count of how often in $d$ the terms $t_i$ and $t_j$ appear within a distance of at most the window size $k$, the inverse document frequency of the target term, and the position at which the target term first appears within the document; $\alpha$ (with $0 < \alpha < 1$) and $\beta$ (with $0 < \beta < 1$) are two more parameters of the method. This process mimics a reader randomly scanning the document for important terms: From term $t_i$ the reader moves with probability $\alpha$ to another term in the vicinity (neighborhood of size $k$)—more likely reaching terms which commonly stand close to $t_i$ and which do not appear in many documents of the corpus. But with probability $1-\alpha$ the reader jumps to some word which can be far away from $t_i$—then more likely to terms at the beginning of the document (where the preference for terms at the beginning is stronger the smaller one chooses $\beta$) and again to terms which do not appear in many documents of the corpus.
The long-term behavior of this random walk is characterized by its stationary distribution, a probability distribution on the terms of $d$. We regard this distribution, which we denote by $r_d$, as a useful document term ranking function: It has high values at terms that are frequently visited during the random walk.
For the calculations in this paper, we fix the three parameters $k$, $\alpha$, and $\beta$ of the method at values derived from a best fit with the manually assigned keywords of a dataset of journal abstracts. That fit also showed that the ranking is only weakly sensitive to moderate changes of the parameter values, so it is not critical that other training datasets result in slightly different optimal parameter values.
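The following sketch illustrates the random-walk idea with a simple power iteration. The concrete neighborhood and jump weights used here (window co-occurrence counts times idf, and a geometric position decay beta**position times idf) are plausible assumptions on our part and need not coincide with the published posIdfRank formula; the default parameter values are placeholders.

```python
import numpy as np

def pos_idf_rank(terms, idf, window=8, alpha=0.85, beta=0.9, max_iter=200, tol=1e-10):
    """Illustrative random-walk term ranking in the spirit of posIdfRank.
    terms: ordered list of prepared terms of one document.
    idf:   dict mapping each term to its inverse document frequency."""
    vocab = sorted(set(terms))
    index = {t: i for i, t in enumerate(vocab)}
    n = len(vocab)

    # count co-occurrences of term pairs within the window
    co = np.zeros((n, n))
    for i, t in enumerate(terms):
        for j in range(i + 1, min(i + window + 1, len(terms))):
            u, v = index[t], index[terms[j]]
            if u != v:
                co[u, v] += 1
                co[v, u] += 1

    idf_vec = np.array([idf[t] for t in vocab])
    first_pos = np.array([terms.index(t) for t in vocab])  # first occurrence position

    # neighborhood step: prefer nearby terms with high idf (assumed weighting)
    walk = co * idf_vec[np.newaxis, :]
    row_sums = walk.sum(axis=1, keepdims=True)
    walk = np.divide(walk, row_sums, out=np.full_like(walk, 1.0 / n), where=row_sums > 0)

    # jump step: prefer terms near the document start and with high idf (assumed weighting)
    jump = (beta ** first_pos) * idf_vec
    jump = jump / jump.sum()

    transition = alpha * walk + (1 - alpha) * jump[np.newaxis, :]

    # power iteration for the stationary distribution
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = rank @ transition
        if np.abs(new_rank - rank).sum() < tol:
            rank = new_rank
            break
        rank = new_rank
    return dict(zip(vocab, rank))
```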
We illustrate the considerations on document term ranking with the following example document from the BBC corpus:
India widens access to telecoms India has raised the limit for foreign direct investment in telecoms companies from 49% to 74%. Communications Minister Dayanidhi Maran said that there is a need to fund the fast-growing mobile market. The government hopes to increase the number of mobile users from 95 million to between 200 and 250 million by 2007. "We need at least $20bn (£10.6bn) in investment and part of this has to come as foreign direct investment," said Mr Maran. The decision to raise the limit for foreign investors faced considerable opposition from the communist parties, which give crucial support to the coalition headed by Prime Minister Manmohan Singh. Potential foreign investors will however need government approval before they increase their stake beyond 49%, Mr Maran said. Key positions, such as those of chief executive, chief technology officer and chief financial officer are to be held by Indians, he added. Analysts and investors have welcomed the government decision. "It is a positive development for carriers and the investment community, looking to take a longer-term view of the huge growth in the Indian telecoms market," said Gartner’s principal analyst Kobita Desai. "The FDI relaxation coupled with rapid local market growth could really ignite interest in the Indian telecommunication industry," added Ernst and Young’s Sanjay Mehta. Investment bank Morgan Stanley has forecast that India’s mobile market is likely to grow by about 40% a year until 2007. The Indian mobile market is currently dominated by four companies, Bharti Televentures which has allied itself with Singapore Telecom, Essar which is linked with Hong Kong-based Hutchison Whampoa, the Sterling group and the Tata group.
If one simply went by the frequency of terms in the document, the 10 highest ranked terms would be:
investment, market, foreign, mobile, India, telecom, government, investor, chief, indian
This list obviously does contain useful key terms of the document, but also very unspecific terms like “market” and “chief”. In contrast, the top ranking according to $\mathrm{tf}\cdot\mathrm{idf}$ would be:
Maran, investment, telecom, mobile, indian, foreign, India, investor, market, direct
Here, the unspecific terms disappear or get shifted to lower positions. The very specific person name Maran, in contrast, appears at the top of the list.
Finally, the ranking according to posIdfRank results in the following top terms:
India, telecom, investment, foreign, limit, Maran, direct, investor, mobile, Sanjay_Mehta
This list now favors important terms of the title and subtitle, and also brings up person names which stand close to other key terms in the document. Among the three ordered term lists, this is the one which condenses the content of the document best.
Now that we have a document term ranking function, we can define the corpus vertex set $T_p$: It is the result of keeping only the top $p$ percent (in the sense of the document term ranking function posIdfRank) of the unique terms of each document.
In order to demonstrate the effect of this document reduction, we show what remains of the above example document if we keep only terms from $T_p$:
India access telecom India limit foreign direct investment telecom Communications Minister mobile market government mobile investment foreign direct investment Maran decision limit foreign investor considerable opposition communist crucial coalition Minister Manmohan_Singh foreign investor government Maran investor government decision investment indian telecom market Gartner principal Kobita_Desai relaxation market indian telecommunication Ernst Young Sanjay_Mehta investment bank Morgan_Stanley India mobile market indian mobile market bharti_televenture Essar hutchison_whampoa Sterling
Having established the vertex set, we now have to specify the edge set $E$ and the edge weight function $w$. The edges are supposed to connect terms that are likely to help in identifying thematic relationships, and the edge weight function should measure how significant this connection is. Among all conceivable options, the simplest turns out to be very effective already, namely that two terms are connected if they appear together in at least one document and the weight of that connection is the number of documents in which both terms co-occur:
$$w(t_i, t_j) = \bigl|\{\, d \in D : t_i \text{ and } t_j \text{ both appear in } d \,\}\bigr|.$$
Several other authors who use network-based topic detection work with more restrictive and complicated edge definitions and weights, involving thresholds and significance values in order to avoid noise produced by accidental or meaningless co-occurrences. We did not find this beneficial in our case, as we avoid this type of noise already by reducing the vertex set with percentages well below 100.
We will go into more detail concerning the influence of the value of $p$ in Section 4. In the present section we work with $p = 50$.
More concretely, following the procedures described above, the BBC corpus of 2225 documents leads to a graph that has 23859 vertices and 1896807 edges. The edge weights vary between 1 and 86 (with highest weight on the link between the terms election and Labour, meaning that 86 documents of the corpus contained both these terms).
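A minimal sketch of this graph construction with python-igraph could look as follows; reduced_docs is assumed to hold the rank-reduced term lists, one per document.

```python
from collections import Counter
from itertools import combinations
import igraph as ig

def build_corpus_graph(reduced_docs):
    """Weighted term co-occurrence graph: edge weight = number of documents
    in which both terms occur."""
    edge_weights = Counter()
    vocab = set()
    for terms in reduced_docs:
        unique_terms = sorted(set(terms))
        vocab.update(unique_terms)
        for t1, t2 in combinations(unique_terms, 2):
            edge_weights[(t1, t2)] += 1

    vocab = sorted(vocab)
    index = {t: i for i, t in enumerate(vocab)}
    graph = ig.Graph(
        n=len(vocab),
        edges=[(index[a], index[b]) for (a, b) in edge_weights],
        edge_attrs={"weight": list(edge_weights.values())},
    )
    graph.vs["name"] = vocab
    return graph
```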
3.2 Detecting topics as communities
The aim of setting up the graph $G_p$ was that in it the topics of the corpus would show up as communities of terms, where we refer to the network-theoretical concept of communities, which are—loosely speaking—groups of nodes that are densely connected within each group and sparsely connected with other groups. While this vague idea is quite intuitive, there are various non-equivalent options for turning it into a quantifiable criterion for identifying communities, some of which we have mentioned in Section 2. It is a priori not clear which criterion fits best for the language-related purpose of establishing dense thematic connections. Comparative work in [4] indicates that modularity maximization is very well suited for achieving the goal of topic detection.
Given an undirected weighted graph $G = (V, E, w)$, modularity is a function which maps a partition of $V$ into vertex groups $C_1, \dots, C_K$ (with $K \le |V|$) to a real number which measures how well the groups can be considered to be communities in the sense of high intra-group connectedness. It can be thought of as consisting of two parts. Denoting for a vertex $u$ the group to which $u$ belongs by $c(u)$ and defining the total edge weight $m = \frac{1}{2}\sum_{u,v} w(u,v)$, the first part,
$$Q_{\mathrm{intra}} = \frac{1}{2m}\sum_{u,v} w(u,v)\,\delta\bigl(c(u),c(v)\bigr),$$
represents the fraction of all intra-group edge weights compared to the total edge weight of $G$, where we have used Kronecker's $\delta$ notation in order to express that $u$ and $v$ should be in the same group. The second part is the same fraction, but not for the graph at hand, $G$, but for what one would expect for a random graph that has the same degree distribution (i.e., edge weight sum connected to each vertex) as $G$:
$$Q_{\mathrm{rand}} = \frac{1}{2m}\sum_{u,v} \frac{\deg(u)\,\deg(v)}{2m}\,\delta\bigl(c(u),c(v)\bigr),$$
with $\deg(v) = \sum_{u} w(u,v)$, the edge weight sum (degree) at vertex $v$.
Modularity now is the difference between these two terms:
$$Q = Q_{\mathrm{intra}} - Q_{\mathrm{rand}}.$$
Maximizing the modularity therefore means finding a partition of the vertex set such that within the groups of this partition the fraction of intra-group edges is as big as possible when compared to the fraction of intra-group edges which one would expect in a similar random graph. At least intuitively, this translates well into what we are looking for: communities of terms that appear more often together in common documents than one would expect if the terms were randomly distributed.
The above definition of modularity can be generalized to include a parameter $\gamma > 0$ as follows:
$$Q_\gamma = Q_{\mathrm{intra}} - \gamma\, Q_{\mathrm{rand}}.$$
Maximizing $Q_\gamma$ leads to coarser or finer communities depending on the value of $\gamma$: In the extreme situation that $\gamma = 0$, the objective function is just $Q_{\mathrm{intra}}$, and its maximum is obviously reached at the trivial solution, i.e., when the whole graph is considered to be one big community. In the other extreme, $\gamma \to \infty$, the objective is to minimize $Q_{\mathrm{rand}}$, and this obviously happens when each single vertex forms its own community, such that the number of communities is equal to the number of vertices.
Varying $\gamma$ in the generalized modularity makes it possible to find communities of variable granularity, and therefore $\gamma$ is called the resolution parameter.
Computationally, maximizing the (generalized) modularity is known to be an NP-hard problem, which means that no efficient algorithms are known that guarantee an optimal solution. However, several efficient heuristic algorithms are known for producing potentially suboptimal but useful solutions.
Here, we use the recently published Leiden algorithm , which is an improvement of the very popular so called Louvain algorithm for maximizing modularity. Starting from the extreme partition where each vertex forms its own community, the Leiden algorithm first visits the nodes in a random order and tries greedily to shift nodes to other communities in a way that offers the biggest modularity increases. In a second step (which is the main improvement compared to the Louvain algorithm), the community partition is refined in a way that produces well-connected communities. After that, an aggregated graph is formed which contains the refined communities of the original graph as vertices. The whole procedure is repeated for the aggregated graph, and this is iterated until no further modularity increase can be achieved.
For our calculations we use the implementation of the Leiden algorithm in the Python version of the library igraph .
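As a sketch, community detection with a resolution parameter can be run as follows; we use the leidenalg package here because it exposes the generalized modularity directly, whereas our own computations rely on the igraph implementation mentioned above.

```python
import leidenalg as la

def term_communities(graph, gamma=1.0, seed=None):
    """One Leiden run maximizing the generalized modularity Q_gamma."""
    partition = la.find_partition(
        graph,
        la.RBConfigurationVertexPartition,  # modularity with resolution parameter
        weights="weight",
        resolution_parameter=gamma,
        seed=seed,
    )
    return [[graph.vs[v]["name"] for v in community] for community in partition]
```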
Applying the Leiden algorithm with the standard resolution parameter $\gamma = 1$ to the example graph results in 8 to 14 term communities, depending on the run. Each run may produce a different number of communities because the algorithm follows a non-deterministic heuristic. Therefore, not only the number but also the terms of the communities, which are only approximations to the optimal community partitioning, may change in each run. However, closer inspection shows that the differences between the runs are not large. In particular, if a run produces more than 8 communities, the additional communities are strikingly smaller than the biggest 8 communities. This is an indication that in some runs the greedy algorithm fails to assign some terms to any of the dominant communities and leaves them in small residual communities.
While these odd cases are reasonably easy to detect, it is nevertheless better to remove such unstably assigned terms completely from the picture. Therefore, for the actual topic detection we use 20 runs and consider only those sets of terms which end up together in the same Leiden community in at least 15 of the runs. Finally, of those sets we retain only the ones that contain at least 10% of the terms that one would expect in a topic if all topics were of equal size. In this way, the remaining sets of terms form stable topics.
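A naive way to implement this stability filter is sketched below: count, over all runs, how often each pair of terms lands in the same community, link pairs that stay together often enough, and keep the resulting groups. The pairwise bookkeeping is only meant to illustrate the idea and is not optimized for large vocabularies.

```python
from collections import Counter, defaultdict
from itertools import combinations

def stable_topics(runs, min_together=15, min_size_fraction=0.1):
    """runs: list of dicts term -> community id, one dict per Leiden run."""
    terms = sorted(runs[0])
    together = Counter()
    for run in runs:
        members = defaultdict(list)
        for term, comm in run.items():
            members[comm].append(term)
        for group in members.values():
            for a, b in combinations(sorted(group), 2):
                together[(a, b)] += 1

    # union-find over term pairs that stay together often enough
    parent = {t: t for t in terms}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t
    for (a, b), count in together.items():
        if count >= min_together:
            parent[find(a)] = find(b)

    groups = defaultdict(list)
    for t in terms:
        groups[find(t)].append(t)
    expected_size = len(terms) / max(len(groups), 1)
    return [g for g in groups.values() if len(g) >= min_size_fraction * expected_size]
```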
Following this procedure, 22426 of the 23859 vertices of the corpus graph (or 94%) can be assigned to one of 8 stable topics. The biggest of these topics contains 4657 terms, the smallest 214 terms.
Based on the assignment of terms to topics, we can also determine which document of the corpus is concerned with which topics. The connection is made by counting the relative number of topic terms within a document: If $n_i(d)$ is the number of terms in document $d$ that belong to the topic cluster $C_i$, then $n_i(d)/\sum_j n_j(d)$ is a good indicator of the importance of topic $i$ for that document. While one can argue that here a probabilistic topic model would offer a sounder method for calculating the topic share per document, we found that a simple count of topic terms works very well, as we will show when we use the term community topics for document classification in Section 4.
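In code, the topic share of a document reduces to a simple relative count, as sketched here with an assumed mapping from terms to their stable topic:

```python
def topic_shares(doc_terms, term_to_topic):
    """Fraction of a document's topic terms falling into each topic."""
    counts = {}
    for term in doc_terms:
        topic = term_to_topic.get(term)
        if topic is not None:
            counts[topic] = counts.get(topic, 0) + 1
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()} if total else {}
```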
Like with other topic detection approaches, the method results in a list of terms that characterize a topic, but the actual interpretation of what that topic is about, is left to human evaluation. It is certainly a difficult task to look at hundreds or thousands of terms, having to make sense of what topic might be encoded in them. Probabilistic topic models that produce topics as probability distributions on terms have an advantage here at first glance: The topic terms can be sorted by their probability, and one usually looks only at the 10 to 30 most probable terms.
In the next subsection we will explain how we suggest to look at big sets of topic terms in a way that facilitates interpretation in the absence of a probability distribution on the terms.
3.3 Topic presentation
Looking in a random order at the hundreds or thousands of terms which constitute a topic is certainly not helpful for grasping its meaning. It would be best to look at the most characteristic terms first; what we need is a term ranking. In the context of network preparation we have already established a document term ranking $r_d$. However, what is required now is a corpus term ranking. The former one can only decide which terms are the most characteristic ones for a certain document $d$, but now we need a more global ranking function $r$, independent of $d$. For a term $t$, the document term ranking function $r_d(t)$ is only defined for those documents $d$ which contain $t$. Since the document frequency varies a lot from term to term, it would not be fair to simply take the arithmetic average of the existing values of $r_d(t)$. This resembles star ratings in recommender systems, where typically some items have been rated many times and some items have only one rating. There one uses a Bayesian average for ranking.
In order to transfer that solution to the problem of corpus term ranking, we first introduce a discretized version $\tilde r_d$ of the document term ranking $r_d$. This is because we do not claim that the document term ranking is sufficiently exact to measure the importance of terms continuously, but rather that it is a good base for grouping terms into sets of more or less important terms. Let $t_1, t_2, \dots$ be the terms of document $d$, ordered such that $r_d(t_i) \ge r_d(t_j)$ if $i < j$. We divide this ordered sequence into parts $P_1, \dots, P_s$ of equal length and introduce a cut-off value $c$; $s$ and $c$ are parameters which can be adjusted so as to result in a ranking that works well. We set $\tilde r_d(t) = c + 1 - i$ for terms $t \in P_i$ with $i \le c$, and $\tilde r_d(t) = 0$ otherwise. After some experimentation we fixed $s = 20$ and $c = 3$; this means that the top 5% of terms in a document get the discretized rating 3, the next 5% the rating 2, and the third 5% the rating 1.
Based on $\tilde r_d$, we calculate the corpus term ranking function $r$ as the following Bayesian average:
$$r(t) = \frac{\bar n \,\bar r + \sum_{d \ni t} \tilde r_d(t)}{\bar n + n_t},$$
with $n_t$, the document frequency of the term $t$, $\bar n$, which is the document frequency averaged over all terms, and $\bar r$, which is the mean rating of all terms.
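A small sketch of this corpus ranking follows, with the averaging conventions spelled out as we read them; the exact prior weights used in the paper may differ.

```python
def corpus_term_ranking(discrete_ratings, doc_freq):
    """Bayesian-average corpus ranking.
    discrete_ratings: dict term -> list of its nonzero discretized ratings
    doc_freq:         dict term -> number of documents containing the term"""
    n_mean = sum(doc_freq.values()) / len(doc_freq)            # average document frequency
    r_mean = (sum(sum(v) for v in discrete_ratings.values())
              / sum(doc_freq.values()))                        # mean rating per term occurrence
    return {
        t: (n_mean * r_mean + sum(discrete_ratings.get(t, []))) / (n_mean + doc_freq[t])
        for t in doc_freq
    }
```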
With the function $r$ we can sort all terms of the corpus.
The ten terms which ranked highest as the most specific terms for the corpus are:
Yukos, Holmes, UKIP, Fiat, Blunkett, howard, Yugansk, Kenteris, Parmalat, Wenger
The ten terms ranked lowest—the least specific ones—are:
year, place, time, month, spokesman, recent, week, Tuesday, long, second
Now we can present the terms that form a topic in the order of decreasing values of $r$. The examples in Table 1 show how advantageous this is. Both columns show terms from the same topic found by community detection in the rank-reduced corpus graph. The topic comprises a total of 4080 terms. The left column of the table shows just 18 randomly picked topic terms. They do not give a clear indication about the meaning of the topic. In contrast, the right column shows the 18 topic terms with the highest values of $r$. They clearly give a much better impression of what the topic is about.
Table 1: Terms of one topic in random order (left column) and sorted by decreasing $r$ (right column).
However, looking at only one or two dozen of the 4080 identified topic terms wastes a lot of information, and we suggest looking at many more terms when interpreting topics in order to achieve a proper assessment. In order to facilitate an overview over many topic terms, next to the specificity ranking we add another structuring criterion to the set of topic terms by grouping them into clusters of semantically related terms. Here, we make use of pretrained fastText embeddings. The fastText approach belongs to the semantic word embedding methods through which words can be mapped to a moderately low-dimensional Euclidean vector space in a way that semantic closeness translates into small metric distances. The pre-trained fastText models are shallow neural nets with output vectors of dimension 300 that were trained on huge text collections from Wikipedia and Common Crawl, using the so-called CBOW task of predicting a word by its surrounding words. In distinction to its predecessor Word2Vec, fastText internally does not work on the level of words but on the level of their constituting character n-grams. In the present context this offers two advantages: First, the mapping works even on words which do not appear in the Wikipedia and Common Crawl collections; second, word variations due to spelling mistakes or imperfect lemmatization usually end up close to each other in the vector space representation.
If we now take the vector representations of the topic terms, we can use any metric-based clustering method for finding groups of semantically related words, or, more precisely, of words whose components have been seen frequently together in huge text collections. After some experimentation, we decided to use hierarchical clustering in its scikit-learn implementation AgglomerativeClustering with distance threshold 1.
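The grouping into semantic strata can be sketched as follows, assuming the official fasttext Python bindings and a downloaded cc.en.300.bin model; the distance threshold mirrors the value mentioned above.

```python
import fasttext
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def semantic_strata(topic_terms, model_path="cc.en.300.bin", threshold=1.0):
    """Cluster topic terms by the proximity of their fastText vectors."""
    model = fasttext.load_model(model_path)
    vectors = np.array([model.get_word_vector(t) for t in topic_terms])
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=threshold
    ).fit_predict(vectors)
    strata = {}
    for term, label in zip(topic_terms, labels):
        strata.setdefault(label, []).append(term)
    return list(strata.values())
```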
We show these semantic groups of terms, which we call semantic strata, rather than single terms, when we present the topics. We order the strata by the value of for the terms per stratum. As a result we have a two-dimensional order in the topic terms: one dimension ranking the specificity and one dimension depicting semantic relations.
For topic evaluation we produce large sheets with the topic terms structured in the stratified way described above. In Table 2 we show only the beginning of such a sheet for better comprehension. The rows depict the strata in which the $r$-sorted top terms of the topic get accompanied by semantically related terms, or, in the case of person names, by persons who usually appear in a common context.
4 The influence of the resolution parameter and of the reduction percentage
In this and the following section, we present concrete observations derived from working with the BBC corpus. The general aim is to study the influence of modeling decisions and parameters on structure, interpretability, and applicability of the detected topics.
All topic term sheets produced in the way described in Section 3, with various values of the parameters and discussed in the following, were given to three evaluators from the social sciences with varying backgrounds in political science, economics, and sociology. Each topic was assessed independently by two of them: The task was to interpret the topic and to find an appropriate label for it. The evaluators also graded the interpretability of the topic from score 1 (= hardly possible to make sense of the topic) to score 5 (= topic recognizable without any doubts). The third evaluator compared the two previous evaluations and prepared a meta evaluation. In the majority of cases, the two first evaluators gave identical or nearly identical labels, which were almost always approved by the meta evaluator. In most other cases, it was possible to agree on a consensus label; in very few cases, the evaluators found the topics unidentifiable. The average of the three evaluator scores was given as final score.
4.1 Varying the resolution
Next, we study the influence of the resolution parameter $\gamma$ using the example of the BBC corpus, rank-reduced to 50% of its terms.
Table 3 shows the resulting number of topics for different values of $\gamma$, some of which were chosen for later comparison in Section 5. As expected, the number of topics rises with $\gamma$. The table also shows that the evaluators generally rated topic interpretability, expressed through the average score, as high. However, there is a clear trend that interpretability declines with higher resolutions.
Up to intermediate values of $\gamma$, there are hardly any topics that were difficult to interpret, but at the highest resolution considered, 15% of the topics were found to be problematic. Nevertheless, it is remarkable that the method succeeds in producing clearly interpretable topics.
We will come back to issues of interpretability in Section 5 but first discuss content aspects of increasing the resolution. We start by looking more closely at the topics found at the lowest resolution.
The BBC corpus comes with a classification into 5 broad classes: Business, Entertainment, Politics, Sport, Tech. The lowest resolution produces 5 topics, labeled by the evaluators as follows: Sports, Music & films, Technology, Politics (UK interests based), Economy. The congruence between classes and topics is obvious on the level of labels. In order to see how well this extends to the document level, we calculate the topic shares in each document. The topic with the highest share we call the dominant topic of the document. In this way we compile the crosstable in Table 4 between preassigned classes and detected dominant topics.
Table 4: Preassigned classes (rows) versus detected dominant topics (columns: Economy, Music & films, Politics, Sports, Technology).
The corresponding classification statistics are shown in Table 5. However, what we have in the detected topic distribution is more than a simple classifier, as we do not only learn which is the dominant topic of a document but also what other topics are visible in a document. For instance, the example document about foreign investment in Indian telecoms presented in Section 3 belongs to the class Business. But in the topic term assignment, while Economy is the dominant topic with a share of 59%, there is also a share of 25% Politics and of 14% Technology, which is a reasonable topic composition for the document. In fact, several of the few cases which were misclassified based on their dominant topic were borderline articles with two similarly strong appropriate topics. The following titles give some examples: The article “News Corp eyes video games market” belongs to class Business, but Technology was detected as dominant topic. The Entertainment article “Ethnic producers face barriers” has Politics as dominant topic. The article “Arsenal may seek full share listing” is in class Business, but the topic Sports dominated here.
Table 5: Classification statistics. Precision is the fraction of true positives among all positive predictions, recall is the fraction of true positives compared to all actual class members, and the f1-score is their harmonic mean. Values are given for each class separately and as an average weighted according to the sizes of the classes.
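These statistics can be obtained directly from the preassigned classes and the detected dominant topics, for example with scikit-learn; the variable names below are placeholders.

```python
from sklearn.metrics import classification_report

def dominant_topic_report(true_classes, dominant_topics):
    """Per-class precision, recall, f1, and their weighted averages."""
    return classification_report(true_classes, dominant_topics, digits=2)
```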
We are now interested in what topics show up when we increase the resolution parameter $\gamma$. A moderately higher resolution results in 8 topics. The heat map in Figure 1 shows how the topic terms of these 8 topics—corresponding to the 8 rows—are distributed over the 5 topics of lower resolution—corresponding to the 5 columns. While the 5 original topics basically persist, 3 of them have smaller spin-offs: The general Sports topic gave rise to a new Athletics topic, from the Music & films topic a new TV topic splits off, and the Technology topic forks into a new (Video) Gaming topic.
This phenomenon of small topics splitting off from a big one is a typical pattern contributing to the increasing number of topics at higher values of $\gamma$. The comparison in Figure 2 of the resolution yielding 19 topics with the one yielding 8 topics shows further examples. The topic Politics decomposes into the big topic UK politics and small topics Terrorism, Euroscepticism, and Nutrition & health. However, two more topics have significant contributions from the former Politics topic: Labour, which also has input from the former Economy topic, and Cybersecurity, which is primarily fed by the former topic Technology.
Another example illustrates that direct topic division is not the only way in which topics of higher resolution emanate. The high-resolution topics Music, Movies/cinema, Television, Marvel comics/movies altogether arise from a joint restructuring of the two coarser topics Music & films and TV (and some faint influences of other topics).
Yet another form of recombination can be seen in the class Sport. At the resolution with 8 topics, it comprises two topics: Sports and Athletics. At the resolution with 19 topics, there are still only two sport topics: Ball sports (with an emphasis on football and rugby) and a combined topic Tennis & athletics. This means that here the higher resolution splits off Tennis from the general Sports topic but immediately combines it with the Athletics topic.
Increasing the resolution further, we end up with 27 topics (see Figure 3). Here, remarkable developments are that the business and investment topics now give rise to 4 related topics, 2 of them going into regional or sector details (Development Asia and Aviation). Also the sport topics get more specific: Football and Rugby are separate topics now.
Continuing the route to even higher resolutions, further specific topics can be detected: the separation of Tennis and Athletics, various country-specific topics, and additional industry sectors like the Automotive industry (see Table 6). All in all, this confirms the role of $\gamma$ as a resolution parameter. However, as topics that are not easy to interpret become more frequent (see Table 3), one cannot reach arbitrarily high resolution.
Table 6 shows a clipping from the term sheet of a topic which can be interpreted as Automotive. The terms in this clipping are dominated by car makes (LVMH being an outlier, though one can imagine why it is included in the same cluster). While lower parts of the sheet not visible here also contain general terms from the subject field, such as “motorsport”, “sportscar”, “racing”, “braking”, “steering”, “throttle”, and “gearbox”, the specificity ranking puts the makes into higher positions.
4.2 The effect of term reduction
In all of the above examples, we have used $p = 50$, i.e., the BBC corpus with 50% of its ranked terms. Here, we briefly investigate what happens when we work without reduction ($p = 100$), or when we reduce much more strongly.
With reference to Table 7, we discuss the effect on topic number and topic interpretability. At low resolution, we observe that increasing reduction (decreasing $p$) gives rise to a larger topic number. This is easily understandable, as in the term community approach common terms are the glue which binds a community together. Having more but less specific terms supports the formation of bigger clusters. On the other hand, smaller numbers of terms make big clusters unlikely and rather produce larger numbers of smaller topics.
While the lack of terms clearly is problematic for topic interpretability under strong reduction, the unreduced corpus at low resolution works well as it reproduces the 5-class structure of the corpus again—just like the 50%-reduced corpus does. In fact, if the only purpose of topic detection were classification with respect to the coarse 5 classes of the corpus, the unreduced corpus would be preferable, as a comparison of Tables 8 and 9 with Tables 4 and 5, respectively, shows. However, the unreduced corpus gains its slightly better ability to predict the dominant topic at the cost of recognizing secondary topics less well.
At higher resolution, the advantages of a moderate reduction of topic terms are clearly visible: both the unreduced and the strongly reduced corpus produce a comparable number of well-interpretable topics as the 50%-reduced one, but both extreme choices also yield many uninterpretable topics. These typically consist of few topic terms. In the unreduced case, many unspecific terms are involved, whereas in the strongly reduced case the topic terms are so narrow that they seem to belong to one document only.
Term reduction is about finding the right balance between removing as many unspecific terms as possible and keeping enough terms for characterizing topics in detail. While there is no practicable way to predict the optimal value of $p$, we can provide some guidelines, based on the results presented here as well as on findings from further tests and work with different text corpora. These include summaries of scientific studies and parliamentary documents, but also abstracts of scientific articles, RSS news feeds, and mixed corpora. More generally, the longer the documents are, the stronger the reduction (i.e., the lower $p$) should be. More specifically, for articles with one or two pages, reductions between 50% and 25% work by and large equally well. Short texts like abstracts work better with less reduction, whereas for long documents with many pages a reduction to between 10% and 20% is helpful, also because it decreases the term network size and consequently the computational effort.
5 Topic interpretability and comparison with LDA
The term community method for topic detection described above is geared towards good topic interpretability. The results presented in Section 4 confirm that evaluators found the topics uncovered by the method to be of high quality in that sense. Before we look in more detail into the factors that determine the topic interpretability we apply standard Latent Dirichlet Allocation to the same corpus for the sake of comparison.
5.1 LDA topics for the BBC corpus
LDA is based on the assumption of a generative process where topics are considered to be probability distributions over all words of the corpus, which are not directly observable but latent in the documents in that each document is a random mixture of topics. The word distributions within the topics as well as the topic distributions within the documents are assumed to be drawn from Dirichlet prior distributions. Fixing the number of topics within the corpus and further hyperparameters that determine the shape of the Dirichlet distribution, methods of statistical inference can be used to determine the latent distributions of words per topic from the observed distributions of words per document. In particular, the method of Gibbs sampling is known to produce convincing topics in many applications.
This motivates a comparison of the topics detected as term communities with the topics identified using LDA. For generating these topics, we used the popular Mallet LDA toolkit through the Python wrapper contained in the library Gensim . We fixed the topic number to values described below and used the defaults for all other parameters. This means that further tuning might improve the results shown below, but also that we compare with the typical way in which LDA is used in the applied literature.
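For reference, a typical way to fit such a model through the Gensim wrapper (available in Gensim versions before 4.0) looks roughly like this; the Mallet path is a placeholder, and all hyperparameters are left at their defaults, as in our comparison.

```python
from gensim.corpora import Dictionary
from gensim.models.wrappers import LdaMallet  # Gensim < 4.0

def fit_mallet_lda(token_lists, num_topics, mallet_path="/path/to/mallet/bin/mallet"):
    """Fit Mallet LDA with default hyperparameters and return its topics."""
    dictionary = Dictionary(token_lists)
    corpus = [dictionary.doc2bow(tokens) for tokens in token_lists]
    model = LdaMallet(mallet_path, corpus=corpus,
                      num_topics=num_topics, id2word=dictionary)
    return model.show_topics(num_topics=num_topics, num_words=30, formatted=False)
```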
The resulting topics were presented to the evaluators as lists of terms, sorted by their probability, where only terms with a probability greater than 0.001 were shown. The labeling and evaluation process was carried out in the same way as for the term communities. Table 10 shows how the evaluators graded the topics for 5 different values of the topic number. For reasons of comparison we also include rows from Table 3 for term communities that resulted in the same number of topics. Altogether, the table shows that the evaluators found most LDA topics well interpretable, but the scores are consistently below the results for the term community method.
The evaluators report that cognitive processing of stratified term clusters (with a significant proportion of named entities), as they were presented in the case of term communities, appears prima facie more complex than that of word lists (with more general terms), as in LDA. After familiarizing with both ways of presentation through interpreting a couple of topics, however, more information at a glance and more details eventually increase the interpretability of topics. In particular, this results in fewer unidentifiable topics and better differentiability between topics. Furthermore, while interpreting topics from both methods becomes more difficult with increasing topic numbers, this effect is stronger for LDA.
Conversely, stratified word clusters including a decent amount of named entities were considered very fruitful from a domain application perspective: Generally speaking, more, and more detailed, information (actors, issues, places, time references, aspects) about the topics and the corpus as such is beneficial for most social science purposes, and hardly ever an impediment.
The clearest results are obtained at a topic number of 5, corresponding to the number of preassigned classes. This model can be evaluated as a classifier in the same way as we did for the term community topics. The results are presented in Tables 11 and 12. Precision and recall are high, but clearly below the values for the term community method.
Table 11: Preassigned classes (rows) versus dominant LDA topics (columns: Economy, Music & films, Politics, Sports, Technology).
In fact, with typical LDA reasoning one would argue that one should not work with a fixed topic number at all but rather tune it so as to achieve an optimal result. Very often, optimization of LDA hyperparameters is understood to be targeted at maximizing the coherence of the topics. According to the literature, one of several coherence measures, usually called $C_V$, is especially well correlated with human evaluation of coherence. It is calculated from the co-occurrence statistics of topic words within text windows in the corpus documents. Table 10 shows this value for the various LDA models. In the present situation, $C_V$ reaches its maximum at one of the smaller topic numbers, with another local maximum at an intermediate one; the two largest topic numbers are clearly suboptimal with respect to $C_V$. Our evaluators' scores are not convincingly correlated with $C_V$.
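The window-based coherence can be computed with Gensim's CoherenceModel, sketched here for topics given as plain term lists:

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

def cv_coherence(topic_term_lists, token_lists):
    """C_v coherence of the given topics with respect to the tokenized corpus."""
    dictionary = Dictionary(token_lists)
    cm = CoherenceModel(topics=topic_term_lists, texts=token_lists,
                        dictionary=dictionary, coherence="c_v")
    return cm.get_coherence()
```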
It is tempting to draw from the results presented so far the conclusion that LDA and term communities offer two options for finding the same topics, both normally working well, with slight advantages concerning interpretability on the side of the term communities. Instead, further analysis of the term composition of the topics shows that the two methods find different topics. This can be seen in the heat map in Figure 4. The columns show the LDA topics, the rows display the term communities with a comparable number of topics. The heat map grayscale indicates how many of the most probable terms of an LDA topic come from which term community. If there were a 1-to-1 correspondence of topics, the heat map would show exactly one single dark square for every row and every column. However, this is not the case. Rather, there are LDA topics that extend over several term communities (e.g., Entertainment comprising Music and Movies), and there are term communities that contain several LDA topics (e.g., Technology & (video) gaming containing Consumer electronics and Computer & Internet). In LDA, Sports is a very broad topic, Financial market is a very narrow topic. In term communities, Economy and UK politics are very broad, while the communities for TV and for (Video) Gaming are so small that they do not contain any of the most likely LDA terms.
Altogether, LDA and term communities seem to offer complementary views of the subjects discussed in the corpus.
5.2 Factors influencing the topic interpretability
Based on the evaluation of the interpretability of many topics produced with two totally different methods of topic detection, we want to look into the factors which make the difference between topics that can be recognized easily and topics that defy interpretation. Therefore, we asked the evaluators in each case of a topic with low score what caused the difficulties with interpretation. One obvious reason, which affected both topics presented as term communities and LDA topic term distributions, was that topic terms seemed to point into contradictory directions. However, there were two other equally important reasons: First, some topics consisted of only a few dozen terms without projecting a clear picture of a common theme—this was a relatively frequent problem for the term community method; second, there were topics that showed almost exclusively generic terms, which lacked expressiveness and made it impossible to recognize a specific theme—this is a common problem for LDA. These two effects can be traced back to a common root: the lack of informative terms. To put it differently: A topic does not only need non-contradictory but also informative terms in order to be easily interpretable.
The corpus term ranking $r$ which we established in Section 3 offers a way to separate informative from non-informative terms—also when talking about terms in the LDA term distributions: We consider a term as informative only if its ranking value is higher than a certain threshold. For the BBC corpus it works well to define the set of highly informative terms of a topic as the set of its topic terms whose ranking value exceeds this threshold. In order to assess the degree of informative terms in a topic, we simply count the number of its highly informative terms.
Assessing the risk of contradictions within the highly informative terms can be done by some form of coherence measure. Rather than using a self-referential intrinsic measure that compares to co-occurrences within the corpus documents or working with a larger external corpus, we suggest using a word embedding coherence, which we base here on the same fastText embedding that we have used already in Section 3, as it derives relatedness of terms from an extremely large text collection: the coherence of a topic is the average of $\mathrm{sim}(t_i, t_j)$ over all pairs of its highly informative terms, where $\mathrm{sim}(t_i, t_j)$ is the cosine similarity between the two fastText vectors for $t_i$ and $t_j$.
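Under the reading given above (average pairwise cosine similarity of the highly informative terms), the measure can be sketched as:

```python
import numpy as np
from itertools import combinations

def embedding_coherence(informative_terms, embedding_model):
    """Average pairwise cosine similarity of fastText vectors of a topic's
    highly informative terms; embedding_model must provide get_word_vector."""
    vectors = {t: embedding_model.get_word_vector(t) for t in informative_terms}
    sims = []
    for a, b in combinations(informative_terms, 2):
        va, vb = vectors[a], vectors[b]
        sims.append(float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))))
    return float(np.mean(sims)) if sims else 0.0
```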
Figure 5 shows how topics which were evaluated as difficult to interpret are positioned with respect to the two measures: the number of highly informative terms and the embedding coherence of the highly informative terms. Included are 85 topics from term community detection (from several resolution settings) and 85 topics from LDA (from several topic numbers). Topics with good interpretability are colored gray, the 20 topics for which the evaluators gave low scores are colored black. Term community topics are marked as circles, LDA topics as diamonds. Obviously, all problematic topics are gathered in the lower left quadrant of relatively small coherence and small term number, confirming the above hypothesis about the two factors that can cause difficulties for interpretation.
This also shows that the coherence measure alone is not a criterion which can predict topic interpretability, at least not when comparing different methods of topic detection. In fact, even a combination of coherence and term number cannot tell whether or not evaluators will find it easy to recognize the meaning of a topic: There are several examples in the lower left quadrant where topics were clear. But producing topics with many informative terms and considerable word embedding coherence does reduce the risk of ending up with meaningless topics.
6 Conclusions and outlook
Term community detection using parametrized modularity in a rank-reduced term co-occurrence network results in topics that are nearly always easy to interpret by domain experts. This observation is not only substantiated by the extensive studies on the BBC news corpus presented here but also by experience with numerous other corpora that our group monitors in the course of strategic analyses: political documents, scientific publications, government notifications, research news, general news. The ability to produce topics on different resolution levels by varying a continuous parameter is a feature that is particularly relevant from a domain expert perspective since text corpora are assumed to contain topics on several levels of granularity. For example, one might be interested in broader thematic areas (like health policy), discourses within these areas (like Corona/COVID-19), subjects within these discourses (like vaccinations), issues within these subjects (like vaccine prioritization) or even events/incidents within these issues (like deviations from prioritization plans in a particular vaccination center).
Our special form of term ranking on corpus level and additional clustering in word embedding space allow a comprehensive but lucid presentation of topic terms which supports topic interpretation. Along this line we have developed TeCoMiner , a software tool for interactively investigating topics as term communities. There we take advantage of the fact that computing time for finding term communities is relatively small, faster than running a sufficient number of Gibbs sampling iterations for LDA.
Term community detection is one of many methods for discovering topics in large text collections. While it is natural to ask which of those methods works best, one can hardly expect a clear answer to this question: One reason is that no quantitative criterion is known that adequately predicts the human interpretability of a topic; as we have seen, topic coherence alone is certainly not sufficient. Another reason derives from the fact that certain methods are known to work better or worse depending on properties of the corpus—a one-size-fits-all method might not exist. This is particularly true for generative topic models, which are always based on quite specific and hardly verifiable assumptions about modeling details like the conditional dependence structure and prior distributions.
The phenomenological approach of term community detection, however, has worked reasonably well for all corpora we have tried and thus can be recommended for an initial overview, but also for deeper insight at higher resolution when investigating unknown corpora. Nevertheless, we saw when comparing with LDA topics that it certainly is interesting to run different methods on the same corpus, as this may produce complementary topics.
The fact that different topic detection methods find different topics can be compared to a recent observation for community detection , where typically the communities also show considerable variation depending on the method used. There it was pointed out that all the differing communities can still be seen as different arrangements of the same building blocks. The same might be true for topics: They may be conceived of as arrangements (of elementary events, incidents, and concepts) that emerge with the corpus itself, rather than as pre-existing ideas from which the authors of documents could choose at the time of writing. The various topic detection methods search for such arrangements in different ways, thereby shedding light on the discourse underlying the corpus from distinct, but most likely complementary, perspectives.
Regarding future work, it will certainly be worthwhile amalgamating the network-theoretical approach to topic detection with other network-theoretical text mining procedures like co-authorship or citation networks, and with knowledge graphs.
The method presented here is likely to be particularly well suited for applications in the social sciences, especially as it produces both informative and well interpretable topics on different levels of thematic granularity. It should therefore be considered as a promising alternative to probabilistic topic modeling.
We are deeply grateful to Jana Thelen, whose work on a predecessor version of the method described here motivated and shaped several of its details. Special thanks go to Rasmus Beckmann and Mark Azzam for fruitful discussions and constant support. We also acknowledge the domain expert support of Friderike Uphoff and Stefan Odrowski in evaluating the topics.
- Burt L. Monroe and Philip A. Schrodt. Introduction to the special issue: The statistical analysis of political text. Political Analysis, 16(4):351–355, 2008.
- Henry E. Brady. The challenge of big data and data science. Annual Review of Political Science, 22(1):297–323, 2019.
- Ken Benoit. Text as data: An overview. In The SAGE Handbook of Research Methods in Political Science and International Relations, pages 461–497. SAGE Publications Ltd, 2020.
- Jana Thelen. Methoden der Netzwerkanalyse im Topic Modeling. Master’s thesis, Department of Mathematics and Computer Science, University of Cologne, 2020. https://elib.dlr.de/141146/.
- Derek Greene and Pádraig Cunningham. Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proceedings of the 23rd international conference on Machine learning - ICML 06. ACM Press, 2006.
- V.S. Anoop, S. Asharaf, and P. Deepak. Unsupervised concept hierarchy learning: A topic modeling guided approach. Procedia Computer Science, 89:386–394, 2016.
- Svetlana S. Bodrunova, Andrey V. Orekhov, Ivan S. Blekanov, Nikolay S. Lyudkevich, and Nikita A. Tarasov. Topic detection based on sentence embeddings and agglomerative clustering with Markov moment. Future Internet, 12(9):144, 2020.
- Zellig S. Harris. Distributional structure. WORD, 10(2-3):146–162, 1954.
- Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
- Wei Xu, Xin Liu, and Yihong Gong. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval - SIGIR 03. ACM Press, 2003.
- Thomas Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR 99. ACM Press, 1999.
- David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, 2003.
- T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(Supplement 1):5228–5235, 2004.
- David M. Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012.
- Jordan Boyd-Graber, Yuening Hu, and David Mimno. Applications of topic models. Foundations and Trends® in Information Retrieval, 11(2-3):143–296, 2017.
- Hamed Jelodar, Yongli Wang, Chi Yuan, Xia Feng, Xiahui Jiang, Yanchao Li, and Liang Zhao. Latent dirichlet allocation (LDA) and topic modeling: models, applications, a survey. Multimedia Tools and Applications, 78(11):15169–15211, 2018.
- David M. Blei, Michael I. Jordan, Thomas L. Griffiths, and Joshua B. Tenenbaum. Hierarchical topic models and the nested chinese restaurant process. In Proceedings of the 16th International Conference on Neural Information Processing Systems, NIPS’03, page 17–24, Cambridge, MA, USA, 2003. MIT Press.
- Justin Grimmer. A bayesian hierarchical topic model for political texts: Measuring expressed agendas in senate press releases. Political Analysis, 18(1):1–35, 2010.
- Jianwen Wang, Xiaohua Hu, Xinhui Tu, and Tingting He. Author-conference topic-connection model for academic network search. In Proceedings of the 21st ACM international conference on Information and knowledge management - CIKM 12. ACM Press, 2012.
- E. Yan, Y. Ding, S. Milojevic, and C. Sugimoto. Topics in dynamic research communities: An exploratory study for the field of information retrieval. J. Informetrics, 6:140–153, 2012.
- Margaret E. Roberts, Brandon M. Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G. Rand. Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4):1064–1082, 2014.
- S. S. Sonawane and P. A. Kulkarni. Graph based representation and analysis of text document: A survey of techniques. International Journal of Computer Applications, 96(19):1–8, 2014.
- A. Rip and J. P. Courtial. Co-word maps of biotechnology: An example of cognitive scientometrics. Scientometrics, 6(6):381–400, 1984.
- Chris Clifton and Robert Cooley. TopCat: Data mining for topic identification in a text corpus. In Principles of Data Mining and Knowledge Discovery, pages 174–183. Springer Berlin Heidelberg, 1999.
- Y. Ohsawa, N. E. Benson, and M. Yachida. Keygraph: automatic indexing by co-occurrence graph based on building construction metaphor. In Proceedings IEEE International Forum on Research and Technology Advances in Digital Libraries -ADL’98-, pages 12–18, 1998.
- Yukio Ohsawa. KeyGraph: Visualized structure among event clusters. In Chance Discovery, pages 262–275. Springer Berlin Heidelberg, 2003.
- H. Wang, F. Xu, X. Hu, and Y. Ohsawa. Ideagraph: A graph-based algorithm of mining latent information for human cognition. In 2013 IEEE International Conference on Systems, Man, and Cybernetics, pages 952–957, 2013.
- Santo Fortunato and Darko Hric. Community detection in networks: A user guide. Physics Reports, 659:1–44, 2016.
- Sanjay Kumar and Rahul Hanot. Community detection algorithms in complex networks: A survey. In Communications in Computer and Information Science, pages 202–215. Springer Singapore, 2021.
- Hassan Sayyadi and Louiqa Raschid. A graph analytical approach for topic detection. ACM Transactions on Internet Technology, 13(2):1–23, 2013.
- M. Girvan and M. E. J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002.
- Shanliang Yang, Qi Sun, Huyong Zhou, Zhengjie Gong, Yangzhi Zhou, and Junhong. A topic detection method based on KeyGraph and community partition. In Proceedings of the 2018 International Conference on Computing and Artificial Intelligence - ICCAI 2018. ACM Press, 2018.
- M. E. J. Newman. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582, 2006.
- Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008, 2008.
- Michael D. Salerno, Christine A. Tataru, and Michael R. Mallory. Word community allocation: Discovering latent topics via word co-occurrence network structure. http://snap.stanford.edu/class/cs224w-2015/projects_2015/Word_Community_Allocation.pdf, 2015.
- Henrique F. de Arruda, Luciano da F. Costa, and Diego R. Amancio. Topic segmentation via community detection in complex networks. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(6):063120, 2016.
- Tommy Dang and Vinh The Nguyen. ComModeler: Topic Modeling Using Community Detection. In Christian Tominski and Tatiana von Landesberger, editors, EuroVis Workshop on Visual Analytics (EuroVA). The Eurographics Association, 2018.
- Minjun Kim and Hiroki Sayama. The power of communities: A text classification model with automated labeling process using network community detection. In Proceedings of NetSci-X 2020: Sixth International Winter School and Conference on Network Science, pages 231–243. Springer International Publishing, 2020.
- Loet Leydesdorff and Adina Nerghes. Co-word maps and topic modeling: A comparison using small and medium-sized corpora (N < 1,000). Journal of the Association for Information Science and Technology, 68(4):1024–1035, 2016.
- Tobias Hecking and Loet Leydesdorff. Can topic models be used in research evaluations? Reproducibility, validity, and reliability when compared with semantic maps. Research Evaluation, 28(3):263–272, 2019.
- M. Rosvall, D. Axelsson, and C. T. Bergstrom. The map equation. The European Physical Journal Special Topics, 178(1):13–23, 2009.
- Andrea Lancichinetti, M. Irmak Sirer, Jane X. Wang, Daniel Acuna, Konrad Körding, and Luís A. Nunes Amaral. High-reproducibility and high-accuracy method for automated topic classification. Physical Review X, 5(1), 2015.
- Wu Wang, Houquan Zhou, Kun He, and John E. Hopcroft. Learning latent topics from the word co-occurrence network. In Communications in Computer and Information Science, pages 18–30. Springer Singapore, 2017.
- Tingting Zhang, Baozhen Lee, Qinghua Zhu, Xi Han, and Edwin Mouda Ye. Multi-dimension topic mining based on hierarchical semantic graph model. IEEE Access, 8:64820–64835, 2020.
- Jianbo Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
- Martin Gerlach, Tiago P. Peixoto, and Eduardo G. Altmann. A network approach to topic models. Science Advances, 4(7):eaaq1360, 2018.
- Brian Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Phys. Rev. E, 83:016107, 2011.
- M. E. J. Newman. Equivalence between modularity optimization and maximum likelihood methods for community detection. Physical Review E, 94(5), 2016.
- Jörg Reichardt and Stefan Bornholdt. Statistical mechanics of community detection. Physical Review E, 74(1), 2006.
- Jia Zeng, William K. Cheung, Chun hung Li, and Jiming Liu. Coauthor network topic models with application to expert finding. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. IEEE, 2010.
- Zhen Guo, Zhongfei Zhang, Shenghuo Zhu, Yun Chi, and Yihong Gong. Knowledge discovery from citation networks. In 2009 Ninth IEEE International Conference on Data Mining. IEEE, 2009.
- Sifatullah Siddiqi and Aditi Sharan. Keyword and keyphrase extraction techniques: A literature review. International Journal of Computer Applications, 109(2):18–23, 2015.
- Nazanin Firoozeh, Adeline Nazarenko, Fabrice Alizon, and Béatrice Daille. Keyword extraction: Issues and methods. Natural Language Engineering, 26(3):259–291, 2020.
- Zhiwen Yu, Zhitao Wang, Liming Chen, Bin Guo, and Wenjie Li. Featuring, detecting, and visualizing human sentiment in chinese micro-blog. ACM Transactions on Knowledge Discovery from Data, 10(4):1–23, 2016.
- Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. https://arxiv.org/abs/1301.3781, 2013.
- Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146, 2017.
- Andrei M. Butnaru and Radu Tudor Ionescu. From image to text classification: A novel approach based on clustering word embeddings. Procedia Computer Science, 112:1783–1792, 2017.
- Rajarshi Das, Manzil Zaheer, and Chris Dyer. Gaussian LDA for topic models with word embeddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2015.
- Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python, 2020.
- Corina Florescu and Cornelia Caragea. PositionRank: An unsupervised approach to keyphrase extraction from scholarly documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2017.
- Rada Mihalcea and Paul Tarau. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411, Barcelona, Spain, 2004. Association for Computational Linguistics.
- Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual web search engine. Comput. Networks, 1998.
- Andreas Hamm. Complex word networks - comparing and combining information extraction methods. https://elib.dlr.de/127501/, May 2019. Contributed to SPCS2019, Stockholm.
- Anette Hulth. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216–223, 2003.
- V. A. Traag, L. Waltman, and N. J. van Eck. From Louvain to Leiden: guaranteeing well-connected communities. Scientific Reports, 9(1), 2019.
- Gabor Csardi and Tamas Nepusz. The igraph software package for complex network research, 2006.
- Xiao Yang and Zhaoxin Zhang. Combining prestige and relevance ranking for personalized recommendation. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management - CIKM 13. ACM Press, 2013.
- Andreas Hamm, Jana Thelen, Rasmus Beckmann, and Simon Odrowski. TeCoMiner: Topic discovery through term community detection. https://arxiv.org/abs/2103.12882, 2021.
- Simon Odrowski and Andreas Hamm. Analyzing parliamentary questions: A political science application of a new topic modelling approach. https://elib.dlr.de/141131/, October 2020. Contributed to SocInfo 2020.
- Andrew Kachites McCallum. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu, 2002.
- Radim Rehurek and Petr Sojka. Gensim–python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2), 2011.
- Michael Röder, Andreas Both, and Alexander Hinneburg. Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining - WSDM 15. ACM Press, 2015.
- Anjie Fang, Craig Macdonald, Iadh Ounis, and Philip Habel. Using word embedding to evaluate the coherence of topics from twitter data. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval - SIGIR 16. ACM Press, 2016.
- Maria A. Riolo and M. E. J. Newman. Consistency of community structure in complex networks. Physical Review E, 101(5), 2020.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00195.warc.gz
|
CC-MAIN-2021-39
| 76,849
| 232
|
http://pythonassignmenthelp64950.blogolize.com/pay-me-to-do-your-project-Can-Be-Fun-For-Anyone-12223977
|
code
|
They should be able to tell you whether you can get a visa within two days and what you need for the application.
This passport does not contain the microchip that holds my biometric information. I asked my friend to call the Peruvian embassy about my situation, and they say I will not have any difficulty with this temporary passport. To summarize, I have both a temporary passport (expired in Feb 2015) and a regular e-passport (expired in July 2014). I want to make sure once again that this is valid. Thank you in advance for your answer.
Peru doesn't offer online visa applications, so if you want to apply for a visa you have to apply at the nearest consulate.
Is it true that, as a citizen of the European Union, I can enter without a visa and can stay 90 days in Peru?
If you are thinking about marrying your girlfriend, you can apply for the Visa Familiar de Residente para el caso de casado con Peruana, which likewise allows you to work in Peru.
I want to apply for a visa at the Peruvian consulate in Beijing. How long does it take to get the visa? Where can I fill out the application for a Peruvian visa?
We have been to the border with Ecuador various times to renew tourist visas (USA passport) and haven't been bothered in any way. Request 180 days and you will nearly always get it. We have friends who have done exactly the same repeatedly.
I do not know how long it will take to get the visa, as processing times and the workload at the consulate vary. So it is best to ask at the consulate where you want to apply.
If you cannot provide one, most airlines refuse boarding. So if you are planning to leave Peru, for example by bus, check with the airline whether they accept a bus ticket or similar, to avoid unpleasant surprises at the airport.
But before doing this, you should have written confirmation from your carrier that they will accept the bus ticket as proof that you will leave the country.
I do not plan to go back nor to get married in the next couple of years, and I'd deeply appreciate it if you could give me details about all options for obtaining a long-term visa for Peru.
Or is the only way to get that work visa through a concluded contract with the Peruvian company before arrival? Any other suggestions?
Speak with your parents. Use your parents, older siblings, or other family members as a resource if you struggle with your homework. They've all been there and been through what you've been through, even if it was a long time ago. Having someone to listen to your "This math is so difficult!"
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823657.20/warc/CC-MAIN-20181211151237-20181211172737-00323.warc.gz
|
CC-MAIN-2018-51
| 3,034
| 14
|
https://forums.adobe.com/thread/633967
|
code
|
when you roll over your button a 2nd time, nothing happens?
correct...I'm not sure why
do you see "ready" twice?
yes, that's what confuses me. I get the trace cue to try it again, and when I mouse over it does nothing.
Do I need to clear the mouse click? add some code to the parent?
copy and paste the output panel when you use:
if you see both traces both times, your rollover is executing. there are a variety of reasons why you might not think it's executing.
I tried the trace on all the clips
they all gave me level0 (parent), but only once.
I did try starting from the very beginning: _parent.gotoandplay(1)
but that didn't do anything (the trace did work).
each movieclip sits on frame one of the parent (different layers), and starts when the swf is run.
the individual clips have stops at the end of their timelines.
The mouse over sits on the last frame of the last movieclip. logo
Thanks for the help so far!.....I was hoping it was gonna be something simple I was missing
that's not what would be traced.
restart your swf, click the same button twice. copy and paste the output panel display.
sorry I just simplified the answer. here is what I got from the trace
running your test, from the prior email
the first trace
a trace from the first mouse over
the second trace (cycle)
The button is clickable (Actionscript) in the first frame of the parent
Tried to trace them all and it did nothing
when I took out the logo, the other three traced
(the mouseover is in the logo movie clip... will this cause a problem?)
is your code and are your buttons on the same timeline?
is your code in a keyframe. what number frame is that?
your buttons are in, at least, one keyframe. what keyframes are they in?
are your buttons timeline-tweened?
The code for the button to be clickable and the mouseover are not in the same timeline
The click code for the object is on frame one of the parent with a script to make it clickable
The mouse over for the same object sits in a (nested timeline) with the logo mc (plays last)
both codes are in their own keyframes
The code for the click is frame one of the parent
The Code for the mouse over is on 400 of the logo mc
*all other nested timelines in the other moviclips are the same
(long timelines, but its what I got to work with =])
the button is not tweened
note: I noticed you said buttons... I'm working with one. Should I be using separate buttons, one for clicking and one for mouseover?
is on frame 1 of a movieclip. place the following on that frame 1: trace("rollover "+this);
clickTAG_btn is on what frame of the movieclip that contains it? place the following on that frame: trace("clicktag "+this);
show your clicktag_btn onPress or onRelease code. on the frame that contains that code place: trace("click "+this);
run your movieclip (once is enough) as long as all three traces execute.
This is apparentlly a bug in flash preview. I had someone else look at the actual swf, and it seemed to work. Thank you for you help. It was appreciated! (Sorry to bother you with something that was a bug in the app itself)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156423.25/warc/CC-MAIN-20180920081624-20180920101624-00336.warc.gz
|
CC-MAIN-2018-39
| 3,053
| 45
|
https://discussion.evernote.com/profile/270570-zxl777888/
|
code
|
Single quotes are always converted to full-width (curly) characters.
Example : 'demo' to ‘demo’
"demo" to “demo”
Since I often save some program scripts, such automatic conversion is very inconvenient.
I also didn't find an option that could be set to turn off this smart feature.
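A possible workaround outside Evernote, sketched below under the assumption that the script text can be copied out of the note: map the curly quotes back to straight ASCII quotes before running the code. The snippet is illustrative only and does not change Evernote's own behavior.

# Illustrative workaround: revert smart quotes in text copied out of a note,
# e.g. before running a saved script.
CURLY_TO_STRAIGHT = str.maketrans({
    "\u2018": "'", "\u2019": "'",   # left/right single quotation marks
    "\u201c": '"', "\u201d": '"',   # left/right double quotation marks
})

def unsmart(text: str) -> str:
    return text.translate(CURLY_TO_STRAIGHT)

print(unsmart("print(\u2018demo\u2019)"))   # -> print('demo')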
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153897.89/warc/CC-MAIN-20210729203133-20210729233133-00446.warc.gz
|
CC-MAIN-2021-31
| 274
| 5
|
https://forums.unrealengine.com/t/about-game-templates-we-want-to-know-what-game-developers-think/252860
|
code
|
This is a questionnaire about game templates, which is the research direction of my graduation project. I hope developers can give their views on game templates. Your views will be the basis of my research direction.
Perhaps you could specify what templates are those? How can we know what you’re working on?
Let’s say I’m going to opt in and then it turns out you’re working on a FlippyBird (or whatever it’s called) template - my disappointment would be immeasurable and the day ruined…
Or these are the items mentioned in #4?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00247.warc.gz
|
CC-MAIN-2022-05
| 529
| 4
|
https://sc19.supercomputing.org/proceedings/tech_paper/tech_paper_pages/pap291.html
|
code
|
Abstract: Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models. However, applications to science remain limited because of the impracticability of rewriting complex scientific simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. To address these, we present a novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference. To guide IC inference, we perform distributed training of a dynamic 3DCNN-LSTM architecture with a PyTorch-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k: achieving a performance of 450 Tflop/s through enhancements to PyTorch. We demonstrate a Large Hadron Collider (LHC) use-case with the C++ Sherpa simulator and achieve the largest-scale posterior inference in a Turing-complete PPL.
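The distributed training described in the abstract follows the general data-parallel pattern; a schematic sketch is given below. It is not the authors' code: the tiny model merely stands in for the dynamic 3DCNN-LSTM, and the MPI backend assumes a PyTorch build with MPI support.

# Schematic sketch only (not the authors' implementation): data-parallel training with
# torch.distributed, the pattern behind large global minibatches spread over many nodes.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="mpi")        # assumes PyTorch built with MPI; typically launched via mpirun
model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10))
model = DDP(model)                            # gradients are all-reduced across ranks
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(128, 64)                  # local minibatch; global size = 128 * world_size
    target = torch.randint(0, 10, (128,))
    loss = torch.nn.functional.cross_entropy(model(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()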
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00005.warc.gz
|
CC-MAIN-2023-50
| 1,148
| 2
|
http://goldenhelix.com/support/Bulletins/index.php?p=820
|
code
|
SVS 8.4.2 Release Notes
- Added Meta-Analysis Forest Plot. See Meta-Analysis Forest Plot for more information.
- Import VCFs and Variant Files bug fixes:
- Fixed import of additional spreadsheets for the Mac version of SVS.
- Fixed issue with specifying affection status from a sample entity text file.
- Prevent progress dialog from popping up when a Python script prompt dialog is open if a progress loop was running before the dialog was created.
- Prevent crash when opening the Data Source Library if the source tree contains a local source that was watching a folder that was deleted while the program was closed.
- Fixed issue merging marker maps fields with a case mismatch in the name when outputting results for Variant Classification.
- Make sure that all executables have the correct permissions for Linux x64 and RHEL builds (aria2c, assistant, etc.)
- In GenomeBrowse for BAM alignment plots, remember the edited value for Filter Multi-Mapped Alignments when the option is checked and unchecked.
- Fixed Import Illumina Final Report python api to not crash when creating the spreadsheets. This will fix the crash with the add on script Illumina Text File Wrapper Script.
- Fixed issue with Annotate and Filter Variants tool when selecting full dbNSFP tracks for version 2.9 and 3.0.
- Added option to output bin Betas and Standard Errors for CMC with Regression.
- When visualizing annotation sources in GenomeBrowse set labels in the following preferred order: “Identifier” > “Ref/Alt” > “Gene Name” > “Name”
- Allow indexing of string array fields for annotation sources. This supports querying against these fields in GenomeBrowse.
- Added HTML format flags into annotation Source Editor so visualization of these fields can be improved through HTML formatting.
- Allow for coverage computation for VCF files in GenomeBrowse that do not contain a Genotype (GT) field but do have other sample-level FORMAT fields.
- Provide option to ignore data extents warning when converting a data file to an annotation source.
- Renamed Annotation Download Window buttons to make it clear that downloaded tracks will not be deleted through this dialog.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814101.27/warc/CC-MAIN-20180222101209-20180222121209-00491.warc.gz
|
CC-MAIN-2018-09
| 2,171
| 19
|
https://praxis.ac.in/workshop-on-kaggle-competitions-live-case-solving-session/
|
code
|
Let us solve a case to understand what Kaggle cases are & how to approach them!
What is Kaggle and how is it useful to an Analytics Aspirant?
Often candidates from other domains interviewing for analytics roles find it difficult to convince the interviewers that they are really passionate about data science. While they may have trained themselves on the tools and techniques, they find it difficult to establish that they have hands-on comfort. An ideal solution to this problem is participating in and solving Kaggle competitions.
Kaggle is the world’s largest community of data scientists. It is a platform for predictive modelling and analytics competitions on which companies and researchers post their data and statisticians and data miners from all over the world compete to produce the best models.
While seasoned data scientists could compete to win the competitions, candidates aspiring to enter the domain could start with simpler problems, practise and gain confidence. Kaggle cases are good conversation starters in an Analytics interview.
What do we plan to do in the workshop?
In this workshop we intend to solve a live competition case from Kaggle with the class. This would be an interactive session wherein the instructor would guide the participants step by step in cracking the case. This session should get you started on solving simple Kaggle cases independently.
Who should attend?
- Anyone who is keen to learn analytics or has already stepped into analytics and wishes to upscale his/her knowledge and skills is welcome to join us.
- If you are eagerly waiting for an opportunity to compete with the best data analysts, data scientists and check your level of expertise, this can be an ideal platform for you.
- If you are keen to solve Kaggle competitions but are struggling due to a lack of guidance and mentorship, you should attend this.
Pre-requisites for attending the workshop
- Prior knowledge of Excel would be necessary. Knowledge of Statistics, SAS/R would not be compulsory. We would be taking up a case that could be solved by Excel and logic!
- You need to carry your laptop to be able to participate in solution building
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00474.warc.gz
|
CC-MAIN-2023-50
| 2,207
| 15
|
https://manpages.org/htmlmasonresolver/3
|
code
|
# make a subclass and use it
DESCRIPTION
The resolver is responsible for translating a component path like /foo/index.html into a component. By default, Mason expects components to be stored on the filesystem, and uses the HTML::Mason::Resolver::File class to get information on these components.
The HTML::Mason::Resolver provides a virtual parent class from which all resolver implementations should inherit.
Class::Container
This class is used by most of the Mason objects to manage constructor parameters and has-a relationships with other objects.
See the documentation on this class for details on how to declare what parameters are valid for your subclass's constructor.
HTML::Mason::Resolver is a subclass of Class::Container so you do not need to subclass it yourself.
METHODS
If you are interested in creating a resolver subclass, you must implement the following methods.
- This method is optional. The new method included in this class is simply inherited from "Class::Container". If you need something more complicated done in your new method you will need to override it in your subclass.
- Takes three arguments: an absolute component path, a component root key, and a component root path. Returns a new HTML::Mason::ComponentSource object.
- Takes two arguments: a path glob pattern, something like ``/foo/*'' or ``/foo/*/bar'', and a component root path. Returns a list of component paths for components which match this glob pattern. For example, the filesystem resolver simply appends this pattern to the component root path and calls the Perl "glob()" function to find matching files on the filesystem.
Using a Resolver with HTML::Mason::ApacheHandler
If you are creating a new resolver that you intend to use with the HTML::Mason::ApacheHandler module, then you must implement the following method as well.
- apache_request_to_comp_path ($r, @comp_root_array)
This method, given an Apache object and a list of component root pairs, should return a component path or undef if none exists. This method is used by the HTML::Mason::ApacheHandler class to translate web requests into component paths. You can omit this method if your resolver subclass will never be used in conjunction with HTML::Mason::ApacheHandler.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474843.87/warc/CC-MAIN-20240229134901-20240229164901-00778.warc.gz
|
CC-MAIN-2024-10
| 2,201
| 21
|
https://deepai.org/machine-learning/researcher/christopher-d-mckinnon
|
code
|
Experience Recommendation for Long Term Safe Learning-based Model Predictive Control in Changing Operating Conditions
Learning has propelled the cutting edge of performance in robotic control to new heights, allowing robots to operate with high performance in conditions that were previously unimaginable. The majority of the work, however, assumes that the unknown parts are static or slowly changing. This limits them to static or slowly changing environments. However, in the real world, a robot may experience various unknown conditions. This paper presents a method to extend an existing single-mode GP-based safe learning controller to learn an increasing number of non-linear models for the robot dynamics. We show that this approach enables a robot to re-use past experience from a large number of previously visited operating conditions, and to safely adapt when a new and distinct operating condition is encountered. This allows the robot to achieve safety and high performance in a large number of operating conditions that do not have to be specified ahead of time. Our approach runs independently from the controller, imposing no additional computation time on the control loop regardless of the number of previous operating conditions considered. We demonstrate the effectiveness of our approach in experiments on a 900 kg ground robot with both physical and artificial changes to its dynamics. All of our experiments are conducted using vision for localization.
03/11/2018 ∙ by Christopher D. McKinnon, et al. ∙ 0 ∙ share
Learn Fast, Forget Slow: Safe Predictive Learning Control for Systems with Unknown, Changing Dynamics Performing Repetitive Tasks
We present a control method for improved repetitive path following for a ground vehicle that is geared towards long-term operation where the operating conditions can change over time and are initially unknown. We use weighted Bayesian Linear Regression to model the unknown actuator dynamics, and show how this simple model is more accurate in both its estimate of the mean behaviour and model uncertainty than Gaussian Process Regression and generalizes to novel operating conditions with little or no tuning. In addition, it allows us to use fast adaptation and long-term learning in one, unified framework, to adapt quickly to new operating conditions and learn repetitive model errors over time. This comes with the added benefit of lower computational cost, longer look-ahead, and easier optimization when the model is used in a robust, Model Predictive controller (MPC). In order to fully capitalize on the long prediction horizons that are possible with this new approach, we use Tube MPC to reduce predicted uncertainty growth. We demonstrate the effectiveness of our approach in experiment on a 900 kg ground robot showing results over 2.7 km of driving with both physical and artificial changes to the robot's dynamics. All of our experiments are conducted using a stereo camera for localization.
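For illustration only, and not the authors' implementation: the abstract names weighted Bayesian Linear Regression as the dynamics model, and the sketch below shows a generic weighted BLR posterior update with a Gaussian prior and known noise precision, where per-sample weights can down-weight older or less relevant experience.

# Generic weighted Bayesian linear regression sketch (illustrative assumptions:
# prior N(0, alpha^-1 I) on weights, Gaussian noise with precision beta).
import numpy as np

def weighted_blr_posterior(Phi, y, w, alpha=1.0, beta=25.0):
    """Return posterior mean and covariance of the weight vector."""
    W = np.diag(w)
    S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ W @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)                                        # posterior covariance
    m = beta * S @ Phi.T @ W @ y                                    # posterior mean
    return m, S

# Toy usage: features Phi, targets y, and exponentially decaying sample weights.
Phi = np.random.randn(50, 3)
y = Phi @ np.array([0.5, -1.0, 2.0]) + 0.1 * np.random.randn(50)
w = 0.98 ** np.arange(50)[::-1]
mean, cov = weighted_blr_posterior(Phi, y, w)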
10/15/2018 ∙ by Christopher D. McKinnon, et al. ∙ 0 ∙ share
Christopher D. McKinnon
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316075.15/warc/CC-MAIN-20190821152344-20190821174344-00531.warc.gz
|
CC-MAIN-2019-35
| 3,095
| 7
|
https://attack.mitre.org/techniques/T1564/006/
|
code
|
T1564.001: Hidden Files and Directories
T1564.004: NTFS File Attributes
T1564.005: Hidden File System
T1564.006: Run Virtual Instance
T1564.008: Email Hiding Rules
T1564.010: Process Argument Spoofing
T1564.011: Ignore Process Interrupts
Adversaries may carry out malicious operations using a virtual instance to avoid detection. A wide variety of virtualization technologies exist that allow for the emulation of a computer or computing environment. By running malicious code inside of a virtual instance, adversaries can hide artifacts associated with their behavior from security tools that are unable to monitor activity inside the virtual instance. Additionally, depending on the virtual networking implementation (ex: bridged adapter), network traffic generated by the virtual instance can be difficult to trace back to the compromised host as the IP address and hostname might not match known values.
Adversaries may utilize native support for virtualization (ex: Hyper-V) or drop the necessary files to run a virtual instance (ex: VirtualBox binaries). After running a virtual instance, adversaries may create a shared folder between the guest and host with permissions that enable the virtual instance to interact with the host file system.
Maze operators have used VirtualBox and a Windows 7 virtual machine to run the ransomware; the virtual machine's configuration file mapped the shared network drives of the target company, presumably so Maze can encrypt files on the shared drives as well as the local machine.
Ragnar Locker has used VirtualBox and a stripped Windows XP virtual machine to run itself. The use of a shared folder specified in the configuration enables Ragnar Locker to encrypt files on the host operating system, including files on any mapped drives.
M1042: Disable or Remove Feature or Program
Disable Hyper-V if not necessary within a given environment.
Use application control to mitigate installation and use of unapproved virtualization software.
ID | Data Source | Data Component | Detects
Consider monitoring for commands and arguments that may be atypical for benign use of virtualization software. Usage of virtualization binaries or command-line arguments associated with running a silent installation may be especially suspect (ex.
Monitor for newly constructed files associated with running a virtual instance, such as binary files associated with common virtualization technologies (ex: VirtualBox, VMware, QEMU, Hyper-V).
Consider monitoring the size of virtual machines running on the system. Adversaries may create virtual images which are smaller than those of typical virtual machines. Network adapter information may also be helpful in detecting the use of virtual instances.
Monitor newly executed processes associated with running a virtual instance, such as those launched from binary files associated with common virtualization technologies (ex: VirtualBox, VMware, QEMU, Hyper-V).
Monitor for newly constructed services/daemons that may carry out malicious operations using a virtual instance to avoid detection. Consider monitoring for new Windows Service, with respect to virtualization software.
DS0024: Windows Registry: Windows Registry Key Modification
Monitor for changes made to Windows Registry keys and/or values that may be the result of using a virtual instance to avoid detection. For example, if virtualization software is installed by the adversary the Registry may provide detection opportunities.
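As a rough illustration of the process-monitoring guidance above (not an official analytic), the sketch below polls for newly started processes whose names match common virtualization binaries; the binary-name list and polling interval are assumptions.

# Illustrative sketch: flag newly started processes whose names match common
# virtualization binaries (VirtualBox, VMware, QEMU, Hyper-V). Not a complete detection.
import time
import psutil

SUSPECT_NAMES = {"vboxheadless.exe", "virtualbox.exe", "vboxmanage.exe",
                 "vmware.exe", "qemu-system-x86_64.exe", "vmwp.exe"}

seen = set(p.pid for p in psutil.process_iter())
while True:
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        if proc.info["pid"] in seen:
            continue
        seen.add(proc.info["pid"])
        name = (proc.info["name"] or "").lower()
        if name in SUSPECT_NAMES:
            print("Possible virtual instance launch:", proc.info)
    time.sleep(5)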
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100710.22/warc/CC-MAIN-20231208013411-20231208043411-00278.warc.gz
|
CC-MAIN-2023-50
| 3,487
| 22
|
http://travel.stackexchange.com/questions/tagged/syd+wifi
|
code
|
Is there free wifi in Sydney Airport?
I'm flying into Sydney airport in about a week, and there's a chance that I'll find myself wanting to look online for a last minute hotel. However, because I will just have arrived, I won't have had a ...
Dec 3 '12 at 10:42
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273381.44/warc/CC-MAIN-20140728011753-00177-ip-10-146-231-18.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 2,267
| 52
|
http://missliterati.com/jupyterlab-vs-jupyter-notebook.html
|
code
|
Berkeley and Cal Poly San Luis Obispo is provided by the Alfred P. Saving snapshots would not be feasible if the project contains large amounts of data. Other geometry-related, context-free utilities should be placed there as well. To their credit, Cal Poly and my department granted my tenure in 2014, largely based on my work in developing the Jupyter Notebook. Saying hello to JupyterLab Jupyter Notebook is an open-source web application that allows users to create and share documentations that contains live code, visualizations, narrative text, and equations. Each time you execute a cell you do so in an environment that depends on your entire history and cannot be figured out by simply reading the notebook.
Contrast Excel and HyperCard, which have no invisible state: you can click and see everything. So I'm wondering: what is the best tool for scientists analyzing neuroimaging data, in my case? But JupyterLab helps transcend the limitations, while retaining the innovation and convenience. Notebook provides keyboard shortcuts also accessible and searchable from command palette. Does such an environment exist? Do I have to pick one? I avoid widgets for data exploration, which should be written from the start in a well-tested and library-focused sort of way even when it's ad hoc. The single-use token previously used to mitigate this has been removed.
I can link to Mathematica from NetLogo and other programs I use too. There are also many community-developed extensions. It brought about a revolution of sorts; now we have people blogging and writing papers in Jupyter, github is full of random useful notebooks. But regardless of how snarky you may want to be, notebooks let you intersperse rich text in format with code, in any number of languages, run it in place, and view its output, in textual, tabular or graphical format. We've paid a lot of attention to making a good keyboard shortcut system, for example. This includes Jupyter and other open source projects such as.
I wish for an environment which would have the same semantics as a script but which would snapshot the environment at the entry to each cell, so that when a cell is modified execution does not have to resume at the beginning. Thanks for the honest feedback, although it sounds bitter - we are working hard on improving the situation with the release schedule indeed. Sure, the configs are editable, but the lack of proper documentation makes it incredibly difficult. Also, table output in Zeppelin allows you to apply sorting out-of-the-box. But if you really want that, it's super easy to do it with a window manager like xmonad, or even just arranging shell windows on your desktop so that you can alt-tab between them easily.
First, users love the notebook experience, and want it to improve, but without losing the core characteristics that make it the Jupyter notebook. For a further detailed information on JupyterLab beta, visit. It could use some updating. We also encourage people to theme the environment, and provide themes via plugins. A main priority after 1. All previous notebook releases are affected. There is seldom any value in looking directly at code and at a console at the same time.
To find extensions, you can search GitHub for jupyterlab-extension. I'm not sure I like where that design is going. Even when tools like this enable Emacs or vi key configurations, the integration just never quite works, and there are environment-specific options you are required to select that come from e.g. The most important feature of JupyterLab is real-time collaboration with several people on a single project. Single document; multiple languages. Recognizing that all this eye candy can sometimes be distracting, JupyterLab allows users to toggle between such tiled layouts and a single document view, wherein the active document takes over the entire editing region of the JupyterLab browser tab or window. You can export notebook to.
This may seem a bit confusing at first, but this offers a great flexibility and a great way to easily share results and also to reproduce them because you save both your code, your notes and your results in the notebook file, they are all kept together so that someone opening your notebook can see what they should expect to get as a result of executing each cell. It is worth mentioning that formerly SageMathCloud now also offers a custom notebook front end for Jupyter that supports real-time collaboration. We encourage users to start trying JupyterLab in preparation for a future transition. I think this persistent state is one of the main advantages of the notebook environment, or the Matlab workspace, which I guess it was inspired by. Jupyter notebook is best used for Data science related tasks in various languages. We have merged more than 300 pull requests since 4. However, if you are designing notebook usage for a big amount of users in an enterprise, take a look on Zeppelin — it will not take long for it to overtake Jupyter with temps that it is developed now.
To process this data, I have a pipeline which post-processes some of the climate model, which I then analyze with a combination of shell or Python scripts if I'm saving, say, a recipe for a figure or analysis to reproduce in batch later or a Jupyter Notebook if I'm interactively hacking on an idea or analysis. My experience is in line with yours, debugging loops and functions is a big pain point. Then they become reliant on this as a crutch and complain when it's no longer there, instead of learning proper ways to write tests and proper debugger usage and let those things automate the problem of zooming in on outlier data, messy data, or bugs. You often want to quickly spawn and kill shell tabs, which themselves may or may not be in the same language. Its basically got less to do with a browser, more to do with being a highly portable data analysis platform. Have a question about this project? I could export the code out of notebook to a regular.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531462.21/warc/CC-MAIN-20190421120136-20190421142136-00350.warc.gz
|
CC-MAIN-2019-18
| 5,977
| 8
|
http://opensourceautomation.com/phpBB3/viewtopic.php?f=8&t=1014&start=10
|
code
|
Thanks, I have followed your input with the picture help of kherron and got it up and working.
Automate wrote: For the Mobile Web setup.
Go to the "Objects" web page and add some new "Place" Type objects. Then edit some of your objects and set the "Container" to the "Place" objects you created.
For the "Status" page see this post viewtopic.php?p=8243#p8243
for an example of the "Custom Property List"
As for the status page i will look into it next after i master the HOME
Again, thanks.
kherron wrote: Have you tried to view the mobile site on your computer? http://localhost:8081/mobile/index.aspx
Also, you have to create Places in your Objects page, like Living room, Kitchen, MasterBedroom. Then put your devices in these places by setting the CONTAINER field to the places you created. For example, if you have a light in the Master Bedroom, go to that device on the objects page, and set the CONTAINER to MasterBedroom.
Yes i can reach Mobile site from Tablet, Cell or Laptop no problems.
As for your setup i have created a 2 Place, Living Room and Computer and is does work.
Same for me if i use LTE/4G but if i use WiFi no problems.
kherron wrote: This works fine from my server.
However, if I try with my phone, the load bar never finishes, and I never see my places??
I get the 3 tabs at the bottom, but that's it. I'm also trying from my local LAN, so no firewall issues?
Yes i get the Uri Method Description with all the stuff.
Automate wrote: Does the REST service work from other computers on your LAN? http://myOSAServer:8732/api/help
Yep it works on the Cell browser and APP on Wifi is ok too.
fiveHellions wrote: Also does it work from the browser on your phone? http://<OSA Server Address or Name>:8081/mobile/index.aspx
If it does work from browser but not android app then check under the settings in the android app and make sure the 'Server Address' and 'Server HTTP Port' are set correctly. Also make sure the 'Default HTTP page' is set to mobile/index.aspx
As you can see I have gone a little bit further tonight
Pictures were taken form the Laptop but i have the same result from the Android Apps on the Cell and Tablet.
My living room has only one LAMP module but it works. Next, the computer: I have 3, and the dummy one is pointing to an unused IP just to test the State. And again it works as it should.
So thank you all for your great help, all your replies and inputs.
I really appreciate it and I will keep going forward to get the thing working.
Enough for tonight...
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911896.73/warc/CC-MAIN-20200710175432-20200710205432-00472.warc.gz
|
CC-MAIN-2020-29
| 2,469
| 21
|
https://community.volumio.org/t/plugin-installation-snapcast/43141
|
code
|
I am new to the forum, and also to Raspberry.
I am looking to create a multiroom sound system. I have more than one Raspberry Pi and I want the sound to sync. I have heard of Snapcast, but I cannot install it because I cannot find the commands. If anyone can give them to me, thank you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368687.25/warc/CC-MAIN-20210304082345-20210304112345-00282.warc.gz
|
CC-MAIN-2021-10
| 274
| 2
|
https://programmer.ink/topic/cmake
|
code
|
First of all, I would like to thank the netizen named "Ren Qilin" for sorting out the PDF. I have a heart. I forgot where to download it, but it's really complete.
But I also have a heart. After all, the PDF is old and long, and my secondary development is also hard.
Separation of compilation and source code
The interm ...
Posted by Helios on Wed, 22 Sep 2021 16:43:36 +0200
Author: Lei Xia
The test team leader of akerson focuses on MySQL related test work.
Source: original contribution
*Aikesheng is produced by the open source community. The original content cannot be used without authorization. Please contact the editor for reprint and indicate the source.
What is Mysql Test?
Mysql Test is an integrated al ...
Posted by jawaking00 on Tue, 31 Mar 2020 13:59:32 +0200
Create and enter the catkin folder
wilson@ubuntu:~/code$ mkdir catkin
wilson@ubuntu:~/code$ cd catkin/
Create and enter src folder
wilson@ubuntu:~/code/catkin$ mkdir src
wilson@ubuntu:~/code/catkin$ cd src/
Initializing the workspace in the ...
Posted by Joefunkx on Wed, 22 Jan 2020 16:50:45 +0100
SPI decoding instance analysis of DSview
How to compile a file if we change it?
Compile and install our modified files
sudo make install
Run. It is recommended to enter DSview on the command line to run. ...
Posted by ramez_sever on Mon, 13 Jan 2020 10:13:53 +0100
I compiled version 1.5.9 of libjpeg-turbo before; now I upgrade to 2.0.0 and compile it based on CMake.
Still, according to the official website, libjpeg turbo is 2-6 times faster than libjpeg, thanks to its highly optimized Huffman algorithm. In many cases, the performance of libjpeg turbo is comparable t ...
Posted by AffApprentice on Fri, 03 Jan 2020 15:57:59 +0100
Using the tree command under Linux, you can easily view the file tree structure under a specified directory, but some systems do not have the command installed, so you need to install it manually. Take the installation on Ubuntu as an example; other Linux systems are similar.
To install under ubuntu:
With the net ...
Posted by bedted on Tue, 31 Dec 2019 06:39:21 +0100
Introduction to MHA:
At present, MySQL is a relatively mature solution for high availability. It was developed by youshimaton, a Japanese DeNA company (now working for Facebook), and is an excellent set of high availability software for failover and master-slave promotion in MySQL high availability environment.During the MSQL ...
Posted by DaRkZeAlOt on Wed, 18 Dec 2019 23:27:33 +0100
lnmp environment construction
Operating system installation: CentOS 6.8 64 bit minimum installation.
Configure IP, DNS, gateway and host name
Configure firewall and open ports 80 and 3306
Close access wall
service iptables stop
/etc/init.d/iptables restart? Restart the firewall to make the configuration ef ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00112.warc.gz
|
CC-MAIN-2021-39
| 2,769
| 43
|
https://miamiedtech.com/courses/project-based-learning-best-practices/
|
code
|
for teachers. by teachers.
Project Based Learning
How can teachers incorporate Project Based Learning in ways that encourage both deep student learning and the participation of students from historically underrepresented groups? In this course, we'll explore some of the fundamental components of Project Based Learning.
PBL will be analyzed through the lens of equity, SEL, and Culturally Relevant Pedagogy.
Build 21st century success skills such as critical thinking, problem solving, communication & collaboration.
Help students become aware of their own academic, personal and social development.
Each milestone must have a clear objective, so that important parts of the project are delivered to each milestone.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817780.88/warc/CC-MAIN-20240421132819-20240421162819-00184.warc.gz
|
CC-MAIN-2024-18
| 723
| 7
|
https://davegiles.blogspot.com/2016/06/choosing-between-logit-and-probit-models.html
|
code
|
I've had quite a bit say about Logit and Probit models, and the Linear Probability Model (LPM), in various posts in recent years. (For instance, see here.) I'm not going to bore you by going over old ground again.
However, an important question came up recently in the comments section of one of those posts. Essentially, the question was, "How can I choose between the Logit and Probit models in practice?"
I responded to that question by referring to a study by Chen and Tsurumi (2010), and I think it's worth elaborating on that response here, rather than leaving the answer buried in the comments of an old post.
So, let's take a look.
Putting the LPM entirely to one side (where, as far as I'm concerned, it rightly belongs!), the issue is whether a standard normal distribution, or a logistic distribution, is the better choice when it comes to modelling the link between our discrete dependent variable and the regressors (covariates). If we choose the normal distribution we end up with the so-called Probit model; and if we choose the logistic distribution we end up with the Logit model.
Let's begin by asking, "how much are the results likely to differ when we make one of these choices or the other?"
The short answer is, "not very much, in general." So, this may seem to suggest that we can basically flip a coin when it comes to deciding whether to go the Logit route or the Probit route. However, it's not quite that simple.
First, the answer given above relates to the simple case where we have a binomial Logit or Probit model. That is, there are only two discrete choices for our qualitative variable. As soon as we move to the multinomial case, where there are three or more choices, the story changes fundamentally. In particular, the multinomial Logit model is computationally simpler to implement than is the multinomial Probit model, and this may factor into our choice. On the other hand, there is the well-known problem associated with the "Independence of Irrelevant Alternatives" that arises with the multinomial Logit model, but not with the multinomial Probit model. So there are pros and cons when it comes to making this choice in the multinomial case.
Second, even when we restrict ourselves to the standard binomial (zero-one) case, there can be some marked differences between Logit and Probit results when we focus on the tails of the underlying distributions (e.g., Cox, 1966).
So, it's still interesting to think about whether we can come up with some formal statistical procedure to help us to decide between the Logit and Probit models, when we have the same (limited) dependent variable.
These two models are "non-nested", so a natural way to proceed is to use some information criterion or other to discriminate between them. This applies whether we're talking about a binomial model or a multinomial model. Note that this is not an example of hypothesis testing. Rather, we're effectively "ranking" the Probit and Logit models. (For some general comments about the use of information criteria in other contexts, see my earlier posts here and here.)
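As a minimal sketch of this ranking approach, using simulated data and statsmodels (the data-generating process and variable names are illustrative, not taken from the studies discussed here):

# Minimal sketch: ranking Logit vs. Probit fits on the same binary data by AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(123)
n = 2000                                   # a fairly large sample, in line with the sizes noted below
x = rng.normal(size=(n, 2))
X = sm.add_constant(x)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x[:, 0] - 0.8 * x[:, 1])))   # logistic DGP
y = rng.binomial(1, p)

logit_fit = sm.Logit(y, X).fit(disp=0)
probit_fit = sm.Probit(y, X).fit(disp=0)

print("Logit  AIC:", logit_fit.aic)
print("Probit AIC:", probit_fit.aic)
# The model with the smaller AIC is ranked higher; this is a ranking, not a hypothesis test.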
One of the few studies to evaluate the effectiveness of alternative information criteria to discriminate between Logit and Probit models is that by Chen and Tsurumi (2010). They consider five different criteria, namely:
- The deviance information criterion (DIC).
- The predictive deviance information criterion (PDIC).
- The unweighted sum of squared errors (USSE).
- The weighted sum of squared errors (WSSE).
- Akaike's information criterion (AIC).
The main conclusions emerging from the Chen-Tsurumi paper are as follows, and they aren't all that encouraging:
- If the binary data that are being modelled are "balanced" (i.e., there is roughly a 50-50 split between the zero and one values), then none of the above information criteria are very effective at discriminating properly between the Logit and Probit models.
- If the data are "unbalanced", then only the DIC and AIC criteria are effective.
- The more information that is available about the higher moments of the underlying distribution of the binary data, the more effective are these criteria in the "unbalanced" case.
- Sample sizes of at least 1,000 or more are needed to be able to discriminate between the Logit and Probit models using this approach.
If these information criteria don't help us very much, is there some other way to choose between the Logit and Probit specifications?
Another option is to think of using a classical hypothesis test. As I noted above, the two models are non-nested, and this has to be taken into account. This approach is followed, for example, by Chambers and Cox (1967). First, they take the Logistic specification as the null hypothesis, and seek a power-maximizing test against the Probit alternative. Then they construct a test with the null and alternative hypotheses reversed.
Once again, the authors' simulation experiments are not particularly encouraging, and relatively large sample sizes are needed for the tests to have appreciable power.
In summary, if you're really concerned about discriminating/selecting between the Logit and Probit models, then there are some tools that are available, but they are only modestly effective.
There's certainly some room for more research into this topic.
Chambers, E. A. and D. R. Cox, 1967. Discrimination between alternative binary response models. Biometrika, 54, 573–578.
Chen, G. and H. Tsurumi, 2010. Probit and logit model selection. Communications in Statistics - Theory and Methods, 40, 159-175.
Cox, D. R., 1966. Some procedures connected with the logistic qualitative response curve. In Research Papers in Statistics: Festschrift for J. Neyman (F. N. David, ed.), Wiley, London.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652959.43/warc/CC-MAIN-20230606150510-20230606180510-00689.warc.gz
|
CC-MAIN-2023-23
| 5,807
| 32
|
https://support.quest.com/technical-documents/metalogix-email-migrator/4.7.1/help/8
|
code
|
This section will guide you through a specific type of migration where the source or target is a flat file folder. Once your source and target connections have been successfully configured, you can start the migration; the general process is very simple:
1. right-click the source node/mailbox(es) >>
2. select Copy ... >>
3. right-click the target >>
4. select Paste ...
"Folder" connection can read MSG, PST and also EML files.
Our Email Migrator allows you to migrate items from supported archive systems to an ordinary folder. After migration, the target folder copies the structure of the source mailbox and contains the migrated MSG/PST files.
A. Migrating from archive
You have copied the archive system mailbox(es) and selected Paste command on the desired folder in Email Migrator console. Now the configuration dialog appears. It features three tabs with the following configuration options:
Create missing mailbox - check to create subfolder with the migrated mailbox name; this subfolder will be the root folder of the migration. If this option is not checked and such subfolder does not exist in the target folder, nothing will be migrated
Verify items migration (slow) - visible only in advanced mode (Settings/Enable Advanced Mode) - when selected, the function checks whether the subject and body of the migrated item are the same on source and target; however, it slows down the migration. The verification can also run after migration as a standalone process. To run it after migration, use the context menu. To do so:
1. Right-click the source mailbox and from the context menu select Compare Mailbox option
2. Right-click the target mailbox and from the context menu select Compare with Mailbox option
3. The progress dialog appears and you can check the log
Delete original shortcuts - when checked, source system shortcuts will be deleted from the user mailbox
Here you can configure notification emails that inform the defined users when the migration task is finished.
Send to - email account or group that the email should be sent to
Send from - email account that the email should be sent from
CC - email account(s) or group(s) that should be CC'd
BCC - email account(s) or group(s) that should be BCC'd
Subject - subject line of the email
Server (smtp) - network name or IP address of the server that the email will be sent from
User & Password - user account credentials to connect to the specified server
This tab contains these subtabs:
Folders - use this tab to define folders which should be excluded from migration. It may not be desired to migrate certain types of folders, e.g. Deleted items, Trash, Spam etc. Click the Add button to specify such folders. In the pop-up dialog enter the FULL PATH to the desired folder preceded by a forward slash " / ". Be aware that only the specified folder will be excluded; folders with the same name but occurring elsewhere in the folder structure will be migrated. For instance, if you specify the folder path "/Private", it will be excluded from the migration. However, any other folder with an identical name but in a different location (e.g. Inbox/Private) will be migrated (see the sketch after this list).
Item Classes - applicable only for some connections - the two boxes on this tab allow you to fine-tune the migration in respect to Exchange-defined message classes (e.g. Calendar, Contacts, Post, various shortcuts etc.). By default only shortcuts of supported archiving systems are excluded from migration. To configure your desired settings, add/remove items in the Included Item Classes and Exclude Item Classes boxes.
Example: To migrate calendar items enter IPM.Appointment*
To migrate contacts items enter IPM.Contact*
To migrate journal items enter IPM.Activity*
Shortcuts for the following archive systems can be migrated:
oPamMessage - shortcuts created by Metalogix Archive Manager
oEnterpriseVault - shortcuts created by Symantec Enterprise Vault
oExShortcut - shortcuts created by EMC EmailXtender
oMimosaStub - shortcuts created by Mimosa NearPoint
oPERSISTMailItem - shortcuts created by HP IAP
oEAS - shortcuts created by Zantaz EAS
oAfterMail - shortcuts created by Quest Archive Manager
oBMA.Stub - shortcuts created by Barracuda Message Archiver
oTJ.KAI.Archive - shortcuts created by iQ.Suite Store
Other - time and size filtering options are activated by checking the respective check-boxes and configuring required settings
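To illustrate the folder-exclusion rule described under Folders above, here is a tiny Python sketch of the matching behaviour. This is purely illustrative - it is not the product's actual code, and the function and variable names are made up:

def is_excluded(folder_path, excluded_paths):
    # Only an exact full-path match is excluded; folders with the same
    # name elsewhere in the tree are still migrated.
    return folder_path in excluded_paths

excluded = {"/Private"}
print(is_excluded("/Private", excluded))        # True  -> skipped
print(is_excluded("/Inbox/Private", excluded))  # False -> migrated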
Once your desired settings are defined, click Run to start the migration.
After clicking the Save button a migration job with the specified settings will be created and saved in the Job list. However, it will not run immediately. You can run it manually later or create a PowerShell script for it. For more information, see the Job List section.
As the migration job runs, a job log is created, as visible in the job list (opened from View / Show Job List). Open the detailed log e.g. by double-clicking the specific job. You will be able to view the logs for the action even while it is still running. By clicking the Target link, the target destination with the newly migrated items is opened. The job can also be paused. When restarted, already processed items are skipped and the job starts at the point where it was stopped.
Items marked with green check-mark were processed successfully. If any issue occurs (red mark), double-click it to see further details.
Finally verify the migration results in your target system by opening specific email(s) and checking that all data were migrated well (email body, attachments etc.).
Email Migrator can significantly improve the time it takes to complete migration jobs by leveraging a distributed migration model. This model essentially comprises an Email Migrator Controller (Host), a central jobs repository or queue, and one or more loosely-coupled Email Migrator agents. By automatically selecting jobs from a central repository, Email Migrator agents are able to distribute the workload efficiently across the resource pool. The distributed model enables parallel processing of migration jobs, which reduces migration time, and enables higher utilization, better workload throughput and higher end-user productivity from deployed resources.
Distributed Migration is typically used for large migration projects and relies on four main components as follows:
An SQL Server database that contains migration metadata.
Controller (or Host)
This is the primary Email Migrator Console that manages agents, the agent database and the migration jobs.
A SQL Server database that contains the repository or queue of migration job definitions which the agents can execute
This is a physical or virtual machine that is remote from the Controller machine. The Controller connects to it in order to run jobs remotely. Once connected, the Controller will push an installation of the Email Migrator onto the Agent machine, which is then configured to execute the migration jobs that are sent from the Controller. Any logging information is then sent to the Agent Database. When an agent is executing a migration job, any interaction with the agent, such as changing a configuration setting, is not recommended.
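The pull-from-a-central-queue model described above can be illustrated with a generic sketch. This is only a conceptual illustration in Python - the product itself uses a SQL Server database as the queue and remote machines running Email Migrator as the agents:

import queue, threading

jobs = queue.Queue()
for i in range(6):
    jobs.put(f"migration job {i}")   # the Controller fills the central queue

def agent(name):
    while True:
        try:
            job = jobs.get_nowait()  # each agent pulls the next available job
        except queue.Empty:
            return
        print(f"{name} running {job}")
        jobs.task_done()

workers = [threading.Thread(target=agent, args=(f"agent-{n}",)) for n in range(2)]
for w in workers: w.start()
for w in workers: w.join()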
Email Migrator can facilitate large migration efforts by distributing workloads across multiple machines (called "Agents") in a method referred to as "Distributed Migration".
A "Controller" machine can be configured to distribute jobs to agents manually, or through the Email Migrator Distributed Migration Wizard which is designed to streamline this setup and configuration.
When using the Distributed Migration feature, a "Controller" machine can be configured to distribute migration jobs to Agent machines in a hub-and-spoke model to help run migration jobs in parallel, to help maximize overall performance. There is the Email Migrator Distributed Migration Wizard that is designed to help streamline and configure the setup for both the Controller and Agent machines.
To start the Distributed Migration Wizard, click the Configure Distributed Migration button located in the Connection tab of the Email Migrator ribbon.
The Wizard contains eight steps, each of which is described below.
1.The Wizard begins the configuration process with a Getting Started section. Review the information presented on this screen before proceeding to the next step.
2.Agent machines use a SQL database to share information with each other. This database must reside in a location that all Agent machines, as well as the Controller, are able to access. Please see the Agent Databases page for more details.
Enter the address of the SQL server where the database is located, or use the Browse button by the SQL Server field to browse for servers on the local system and on the network.
3.Now enter the name of an existing SQL database in the SQL Database field, or click Browse to view the databases located on the chosen SQL server. To create a new database, select the New Database button in the Browse window, and enter a name for it.
4.If specific authentication credentials are required in order to log in to the SQL server, enter them in the appropriate fields. It is recommended that SQL Server Authentication is used to connect to the Agent Database.
5. The Wizard will now enable you to copy application, environment, and user mapping settings files from the Controller machine to the Agent database to be used in place of the default settings on the Agent machines. Please see the Updating Agents section for more details.
Select Copy Settings to copy UserSettings.xml, EnvironmentSettings.xml, and ApplicationSettings.xml from the local system to the Agent database. A confirmation will appear after the settings have been successfully copied.
Note that after these settings have been copied to the Agent database and the console has been restarted, Email Migrator will no longer look in local settings files for configuration information. This means that changing local settings files will have no effect on the console's operations.
6.To make changes to settings after copying them over to the Agent database, changes need to be done directly in the Agent database.
7.Email Migrator utilizes security certificates in distributed migrations to maintain secure communication between the Controller and Agent machines. Please see the Installing a Certificate for use with Remote Agents page for more details.
Select an existing certificate to use, or create a new one through the Generate New Certificate button. When creating a new certificate, make sure to use a certificate name that does not contain any spaces. Also use a robust password that you will remember, and export the certificate to a folder where it can be found again in the future.
This certificate will automatically be used for the Controller machine where the Wizard is currently running from, whether or not it was generated now or at a previous time.
8.The Email Migrator Installation package is needed during the setup of distributed migration to ensure that all the Agents are configured properly. Select the Download Installer button to download a new copy of the installer to that system. Please see the Requirements, Configuration and Installation for Distributed Migration page for more details.
An indicator will appear at the bottom of the Wizard indicating that the download is in progress, and the Next > button will become available when the operation is complete.
9.Agents are configured and deployed at this stage. It is recommended that the Agent should be in the same network as the Controller. Enter the name of the Agent computer in the Agent Name field to find it by name, or select Browse... to browse for it in the network. The IP address of the system can also be used to locate and connect to it.
Make sure to enter the correct user name and password for the system before proceeding.
10.Email Migrator will perform a check of the specified Agent system to ensure that it meets with the required prerequisites.
If the Wizard finds that any of the services are not running, click the Enable Missing Services button to have the Wizard attempt to enable those services remotely.
Windows Server 2008 R2 or later is the recommended minimum Operating System to be used for Agent machines in Distributed Migration. If the Agent does not meet this requirement, the Wizard will throw a warning message, but it will allow you to proceed so long as the above services are available.
11.Click Next > when all services are enabled and you are ready to proceed. Then click Deploy Certificate to deploy the certificate configured above to the Agent system.
A confirmation message will appear confirming successful deployment.
12.Now select the Deploy Application button to begin the deployment procedure. An indicator will appear at the bottom of the Wizard indicating that the deployment is in progress.
13.The Summary window shows the configuration options that have been selected for deployment, and the status of the deployment to the specified Agent(s).
To add additional Agents to the distributed migration, select the Deploy New button. The Wizard will then return to the Configure Agents screen (shown above) where the last step can be repeated for the additional Agent.
14.Click Finish at the bottom of the Summary screen to close the Distributed Migration Wizard. Then close the console and start it up again to make use of the new settings.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361849.27/warc/CC-MAIN-20210301030155-20210301060155-00523.warc.gz
|
CC-MAIN-2021-10
| 13,357
| 82
|
http://getenvy.com/archives/envywiki/QuickGuide.Library.html
|
code
|
Using the Library to organize the downloaded files
The Library is one of the most important tools Envy offers, since it will help you organize the files you are sharing, and gives easy access to play them in the Envy Media Player. This section will focus on the basics of the Library's features.
For detailed information on the Library, refer to the Library Manager page.
Looking for the files that you just downloaded:
- When you click on the Library tab, you are going to see a list of "virtual folders". But currently what we are looking for is the panel below the main page, which lists our most recent downloads.
- When you click on one of them, it will take you to the location of that file and highlight it on the main list.
- You can then double click that file if you want to play it on the media player, or use the Library toolbar (below) or the context menu (right click) for more functionality.
- When selecting a file, the panel where the recent files appeared will be changed to show detailed information about the file, and also gives you the ability to rate it, so that other users can know how good it is, by using the "Rate this file" link on the top right corner of the details panel.
- To get back to the list of recent files, you can click on the blank part of the folder structure shown on the left; that will take you back to the main page of the Library.
How the Library works
Envy features two different ways to organize the files in the Library: Physical, which works with files in real shared folders, as they appear in your hard drive's file system, and the Organizer, which allows you to organize files into virtual folders, without actually moving them about the hard drive's file system. Switch between Physical and Organizer mode by using the two buttons that are at the top of the folder structure panel on the left. Each one has its advantage, and you might find yourself using features of both in the future. Note: The Library only contains the files that you already downloaded, or added.
The Organizer is a Library that works with Virtual folders, where the files are organized by their metadata (detailed) information. For the purpose of this, let's say that you are looking for the classical music that you downloaded.
- Go to the folder structure on the left, and where it says My Music, go to Genre, and double click it to expand it.
- Under Genre are shown the genres of the files available on the Library, now since we were looking for the Classical music, you can click on the Classical genre.
- At the right appear all the songs that are classified as Classical, by using their metadata. You can now play them if you like, by double clicking, or using the context menu.
- It could also be the case that we wanted to add and play all the Classical music to the Media Player. So you can now click on the Play Album button located on the Library toolbar (bottom), or use the context menu by right clicking on Classical.
- This will pop up the Media Player with all the songs added to it.
- Now switch back to the Library to continue with the tutorial, by using the Library tab button on the toolbar.
- to be continued...
The Physical Library is where the files are located on your computer. They will appear just like folders, shortcuts or files would anywhere else on your computer, such as your desktop for example. Let's say you have a folder full of music files that you wanted to share with other members of the Envy community.
- Go to the library and click on the Physical button.
- Right click in the directory tree and select "Share Files".
- Click on "Add"
- Navigate through the directories until you find your folder you want to share. When you find it, highlight it and press ok. Press ok again in the next box.
- Envy will begin a process called hashing, and enable you to share the files inside the folder with other people in the p2p community.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694908.82/warc/CC-MAIN-20200127051112-20200127081112-00093.warc.gz
|
CC-MAIN-2020-05
| 3,920
| 25
|
http://flamerobin.org/releases/0.7.2.html
|
code
|
Windows (setup & zip), Linux (gtk1 & gtk2), Mac OS X and source packages are available for download. Enjoy.
- Fixed crash when opening role's property page with Firebird 1.x
- Fixed DDL extraction for unique constraints
- Fixed updating of privileges page for procedures
- Removed dependency on Firebird 2.0 for Gtk2 package
- this is a bugfix release
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812584.40/warc/CC-MAIN-20180219111908-20180219131908-00782.warc.gz
|
CC-MAIN-2018-09
| 351
| 6
|
https://askjavascriptquestions.com/2021/04/how-can-i-run-create-react-app-from-a-dynamic-index-page-in-development-mode/
|
code
|
I want to use create-react-app in my new project. Specifically I want to use it with in conjunction with a Laravel blade view (php index file).
From my research so far it seems that development mode, when you run
npm start, will only work with a static HTML file instead of my dynamic index file, and is run from http://localhost:3000/.
Does anyone know of a way of getting around this so I can run my app on a different URL that points to a server-side rendered index page and still enjoy hot reloading?
Thanks for the help.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991562.85/warc/CC-MAIN-20210519012635-20210519042635-00485.warc.gz
|
CC-MAIN-2021-21
| 530
| 5
|
http://security.stackexchange.com/questions/tagged/file-upload+antimalware
|
code
|
Use PHP to check uploaded image file for malware?
I want my users to be able to upload a photo. Currently I am not checking the uploaded photo for problems of any kind, although I do limit the size to 32k. Is there any way for me to check uploaded ...
Jan 8 '13 at 15:05
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133568.71/warc/CC-MAIN-20140914011213-00139-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
CC-MAIN-2014-41
| 1,962
| 50
|
https://www.justanswer.com/employment-law/9q0x4-employment-positions-require-people-least-18-years.html
|
code
|
Employment Law Questions? Ask an Employment Lawyer.
I'm not sure that I agree that asking for a license in the interview process is legal.
I really do not know how to convince you otherwise. You asked for an expert reply, and I provided it.
It is perfectly legal to ask for ID as part of the interview process to ensure identity and majority of age, and this applies not only after someone is hired but throughout the process. There is no state or federal law that disallows this.
No one else will tell you otherwise - I guarantee.
Regardless, good luck in your hiring process.
Gentle Reminder: Please, use REPLY or SEND button to keep chatting, or RATE POSITIVELY and SUBMIT your rating when we are finished. You may always ask follow ups at no charge after rating.
Handing someone your license during the interview process would clearly indicate your age. An employer can't directly ask about age during the interview; asking for a license would elicit the same information that we can't directly ask for.
We don't need to talk anymore, but, you're right - I'm not convinced.
You are likely thinking of asking about age discrimination. See here:
This applies to people 40 or older. Since you are asking to confirm if someone is 18 or older, asking for ID to confirm age is perfectly legal.
All the best.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105451.99/warc/CC-MAIN-20170819124333-20170819144333-00391.warc.gz
|
CC-MAIN-2017-34
| 1,305
| 12
|
http://www.stiivi.com/projects.html
|
code
|
Follow me on Github:
Data Brewery is a collection of open-source data warehouse libraries and tools.
Lightweight distributed online analytical processing (OLAP) framework and server. Used for analytical modelling and aggregated browsing. Serves as a foundation for reporting and analytical applications.
Experimental data integration framework based on metadata composition of abstract data objects. Keeps the data in their natural
Small utility library for embedding arithmetic expressions parser and compiler into other libraries and applications.
Links: Github (contains documentation)
Note: The following projects are not maintained any more; they are listed here for software archaeologists, so that they are not forgotten.
StepTalk (ObjectiveC) – Smalltalk interpreter and scripting framework that allows scripting of ObjectiveC objects using the Smalltalk language. Official GNUstep scripting framework. Used to work on early Cocoa
AgentFarms (ObjectiveC) – Simulation framework with support for multi-agent based simulations. Used portable distributed objects for running the simulator remotely and observing the state through a “Farmer” graphical application.
Development Kit (ObjectiveC) – DevelKit Framework is a GNUstep/early Cocoa framework with tools for reading, understanding and generating source code from other applications
XY (ObjectiveC) – Simple graph drawing framework for GNUstep/early Cocoa, scriptable with StepTalk
Various contributions to the GNUstep project.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826715.45/warc/CC-MAIN-20181215035757-20181215061757-00063.warc.gz
|
CC-MAIN-2018-51
| 1,484
| 12
|
https://community.tyk.io/t/tyk-gateway-dashboard-run-at-port-80-and-custom-domain/2666
|
code
|
I’m using this script: https://gist.github.com/lonelycode/4f645c4733faaa74d8fd to install Tyk-gateway and Tyk-dashboard.
It’s running all fine, but I need something as this specific:
I need to run
- API-Gateway at domain. net (root domain), port 80
- API-dashboard at developer.domain. net, port 80 (so that my developers will no need to put :3000 to access my page)
As I can see the source code of the gist file above, it’s using the same docker command as your docker page, but I’m wondering if I can set the port to port 80, and set the API-gateway domain as well as dashboard port and domain name?
I’m running Tyk On-premise on OVH server, Centos7. All Mongodb, Redis, Tyk-gateway, tyk-dashboard, docker is in the latest version.
Thank you much!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141460.64/warc/CC-MAIN-20200217000519-20200217030519-00474.warc.gz
|
CC-MAIN-2020-10
| 759
| 8
|
https://www.computerworlduk.com/it-business/theres-no-fud-like-an-old-fud-3569031/
|
code
|
The Economist has been writing poorly-informed articles about open source for years – I dissected a particularly egregious example back in 2006. So it's hard to tell whether the flaws in this new book review are down to that antipathy, or whether they are inherent in the title it discusses, "The Comingled Code". As far as the latter is concerned, the following information does not inspire confidence:
its academic tone makes it unlikely that the book will fly off the shelves, even in areas with a lot of hackers (who are sure to take offence at the fact that the authors took money for research from Microsoft, long the arch-enemy of the open-source movement - although they assure readers that the funds came with no strings attached).
Since I've not read the book – and I'd rather not shell out £25.95 for the dubious pleasure of discovering where the errors originate – I'll limit myself to addressing the arguments outlined in the Economist review rather than worrying about where they originated.
First, we have this:
Open-source developers, for instance, are widely believed mostly to be volunteers who just love writing code. While this may have been true in the early days of computing, the motivation and background of programmers is now much more mixed. Many work for firms that develop both open-source and proprietary programs and combine them in all kinds of business models. Nearly 40% of companies surveyed fall into this category.
Well, no, actually: this has been clear for years. Anyone who still believes that this stuff is being written in bedrooms has clearly not been paying attention.
This also comes across as pretty meh-worthy:
The survey also indicates that the two software worlds are much more "comingled" than their respective champions would have it. More than a quarter of companies happily mix and match both sorts, in particular in poorer countries.
So you mean people don't rip out every piece of proprietary software in their business to replace it with entirely open source, but tend to use combinations of both? Well, whodathunk? Actually, more seriously, I think that figure of "more than a quarter" seriously underestimates the practice, which I imagine is much closer to 100% these days. If this is a self-reported figure it's probably more an indication that many CTOs don't actually know what's going on in their IT departments (which has been true for very many years where open source is concerned.)
But my main concern here is with the follow section:
Yet the finding that open-source advocates will like least is that free programs are not always cheaper. To be sure, the upfront cost of proprietary software is higher (although open-source programs are not always free). But companies that use such programs spend more on such things as learning to use them and making them work with other software.
Yes, it's a variant on that old FUD that free software is not actually free (gosh, really?) that Microsoft tried about ten years ago and gave up when it realised that nobody said it was when you took into account all the factors like paying wages. But leaving aside that this, too, is hardly news to anyone, let's just look at the central claim of the current incarnation of that FUD:
companies that use such programs spend more on such things as learning to use them and making them work with other software
So does the first part mean that learning to use a new piece of open source software is inherently harder than learning to use a new piece of proprietary software? I've not seen a single piece of research that suggests that. What I have seen documented is that people who are currently using Microsoft Office, say, find it harder to learn to use OpenOffice, say, than to continue using Microsoft Office. Which is, of course, a piece of wisdom that is once again firmly located at the very heart of the Land of the Bleedin' Obvious.
So, passing swiftly on in the hope that there might be a more substantive issue here, we have the second claim: that companies spend more on making open source work with "other software". But wait, what could that "other software" refer to? Since it's not open source (because it's "other", not open source) it is clearly proprietary; so the problem comes down to making open source work with proprietary software. And why might that be?
Well, it could be because closed software is, by definition, closed, with manufacturers that are generally unhelpful when it comes to providing information that might help others to work with their products (because they want to keep their super-duper "secrets", er, secret). In other words, the problem lies not with open source but with the closed source software, which makes it unnecessarily hard to bring in applications from other vendors (whether using open source or not).
Indeed, it is this problem – the difficulty of using other, possibly open software with proprietary applications – that is one of the key reasons why companies want to move to open source, to avoid this kind of lock-in by breaking the vicious circle. But to say that open source is more expensive because of the problems caused by the proprietary software it is trying to replace is rich indeed. The correct statement would be that proprietary software has hidden costs that manifest themselves when companies try to use new software, for example open source. In other words, it's the other way around.
To be fair, the Economist article does mention the solution to this problem:
governments should make sure that the two forms of software compete on a level playing field and can comingle efficiently. One way of doing this would be to promote open standards to ensure that proprietary incumbents do not abuse a dominant position.
What it – or the book – fails to note is that this is precisely what proprietary software companies have been fighting desperately to prevent. The battle over the definition of open standards in the European Interoperability Framework 2.0 is only the most recent example.
In other words, once again it is the closed-source incumbents that are the problem here, trying to tilt the playing-field in their favour, and to ensure that there are still costs associated with making open source work with their closed source offerings by insisting on licensing fees or other conditions (so how can that be called an "open" standard?)
These difficulties should be seen as another compelling reason for preferring open source and true open standards that permit the use of the main free software licence, the GNU GPL. This would ensure suppliers do indeed compete on a level playing-field, not one skewed by lock-in to proprietary, patented approaches. Open source solutions force proprietary vendors to cut their margins to the bone if they want to compete on these completely fair terms, and the customers benefit from this Darwinian selection process.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254882.18/warc/CC-MAIN-20190519121502-20190519143502-00327.warc.gz
|
CC-MAIN-2019-22
| 6,904
| 22
|
https://www.mail-archive.com/php-general@lists.php.net/msg245703.html
|
code
|
At 9:40 AM +0100 4/27/09, Richard Heyes wrote:
I know it's probably heresy for a lot of coders, but does anyone know a
function or class or regexp or somesuch which will scan through a PHP
file and convert all the CamelCase code into proper C-type code? That
is, "CamelCase" gets converted to "camel_case". I snagged a bunch of
someone else's PHP code I'd like to modify, but it's coded the wrong
way, so I'd like to fix it.
(I'm teasing you CamelCase people. But I really would like to change
this code around, because it doesn't happen to be my preference.)
I'd say, if you must, then change as you go. If you do it all in a
oner you'll likely introduce a shed load of problems.
Not only that, but if you leave each function alone until you work on
it, then you'll have at least some indication of where/when an error
has been introduced.
Many times when I'm using a piece of code that isn't formatted as I
like, I first get it to work as I want changing only the functions
that I must change. After everything is working, I then step through
the code, reformat, and change in small steps.
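For what it's worth, here is a rough sketch of the identifier conversion being discussed - written in Python rather than PHP, and operating on individual identifiers only (a real pass over PHP source would need to avoid strings, comments and class names, which is exactly where the "shed load of problems" comes from):

import re

def camel_to_snake(name):
    # Insert an underscore before each capitalized word, then lower-case.
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()

print(camel_to_snake("CamelCase"))        # camel_case
print(camel_to_snake("getHTTPResponse"))  # get_http_response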
http://sperling.com http://ancientstones.com http://earthstones.com
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162385.84/warc/CC-MAIN-20180925202648-20180925223048-00168.warc.gz
|
CC-MAIN-2018-39
| 1,257
| 21
|
https://www.experts-exchange.com/questions/21887455/2-Exchange-Servers-on-the-WAN-using-smarthost-with-problems.html
|
code
|
I have 2 Exchange servers, one in NY and one in ZH. I have user mailboxes on both servers and they communicated just fine within our WAN (via a VPN). Mails pass freely between the two sites without a problem. The users all have a standard email addresses, with multiple aliases. All user's email addresses are set to a single standard address of companyname.com.
Until yesterday everything was working fine. One of the servers got put on an RBL list and we went and addressed the problem and had the server removed from the list. During that time, however, we set up the server to send emails via a smarthost that our security providers had set up for us (for other firewall and VPN purposes). There was an interesting side effect, which was that this server was set to route all traffic that goes to companyname.com to the NY server and all traffic that goes to companyname.ch to the Zurich server. Problem is, as mentioned above, all users in the company have their default email address set in Active Directory as companyname.com. So all mails being sent to Zurich users were put in the SMTP queue for sending to zh-exchange.companyname.com (which is resolved correctly from the Exchange server, mind you). When the virtual SMTP queue hit the smarthost, the smarthost saw that the address was companyname.com and rerouted the mails back down to the NY server. The SMTP queue never got an ack, so it had the messages set to retry, and that's how I found it this morning.
Removing the smarthost cleared the queue without problems.
My question: Is there a way to set the smarthost for external emails and set up a route for internal, WAN servers where mails will not be sent via the smarthost first? As I understand it, addresses are resolved either via DNS or the smarthost. If you define nothing, it uses DNS, which is why this worked before. If you define a smarthost, ALL messages travel out via the SMTP connector through the smarthost.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863109.60/warc/CC-MAIN-20180619173519-20180619193519-00205.warc.gz
|
CC-MAIN-2018-26
| 1,923
| 4
|
https://community.miniprofiler.com/t/irc-channel-for-informal-quick-discussions/207
|
code
|
Discourse is awesome for sharing thoughts and paving ways.
However, I believe some smaller angles of ruby-bench would go faster with instant discussion.
A specific architectural point, or just to say someone is working on something in specific for example.
That’s why I just registered the ##ruby-bench channel on freenode.
Feel free to join.
@sam: right now, I’m the only one with admin access on that room.
Let me know your freenode username and I’ll add you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574588.96/warc/CC-MAIN-20190921170434-20190921192434-00271.warc.gz
|
CC-MAIN-2019-39
| 467
| 7
|
http://community.fhir.org/t/securing-a-fhir-server/1509
|
code
|
My company is building a clinician-facing SMART on FHIR application which is launched from an EMR context. We’ve been using SMART’s dev sandbox (https://github.com/smart-on-fhir/smart-dev-sandbox) for our internal testing (no real patient data), but it doesn’t seem to have any real authentication or access restriction abilities built in to it. We’d like to use something that we can safely expose to the internet so our sales people, account managers, etc can demonstrate our product away from the company network.
We’ve recently tried out Azure’s FHIR service, but it doesn’t support r2 and doesn’t include things like a patient browser. We also use the sandboxes provided by the EMR vendors, but we need something within our company’s control for this.
We’re hoping to avoid the expense of adding a custom application layer for authentication.
Are there any recommendations for either securing the SMART dev sandbox or using another tool?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00057.warc.gz
|
CC-MAIN-2022-49
| 962
| 4
|
https://sofia.usgs.gov/eden/edenapps/depth-dayssincedry.php
|
code
|
Update: Version 2.0.1 is now available (February 2013), for PC and Mac. See installation information below.
EDEN Depth&DaysSinceDry is a program for creating daily surfaces (in NetCDF file format, .nc) of water depth and days since dry from EDEN daily water level surfaces and ground elevation model.
The daily surface of water depth is created by subtracting the ground elevation for the EDEN grid cell (400 meter by 400 meter cells) from the water level surface. The days since last dry indicates the number of consecutive days since the start of the time period that an EDEN grid cell surface has had a depth value greater than zero. A count of "0" indicates that the cell was wet for that day. Once the cell is dry, the count begins and continues until a wet day is encountered. When this happens, the count is returned to "0".
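As an illustration only (this is not the EDEN tool's code, and the depth values are invented), the counting rule just described can be sketched in a few lines of Python:

def days_since_dry(depths):
    # 0 on a wet day (depth > 0); once the cell goes dry the count starts
    # and increments each day, until a wet day resets it to 0.
    counts, count = [], 0
    for depth in depths:
        count = 0 if depth > 0 else count + 1
        counts.append(count)
    return counts

print(days_since_dry([5.0, 2.1, 0.0, 0.0, 0.0, 3.2, 0.0, 1.0]))
# -> [0, 0, 1, 2, 3, 0, 1, 0]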
All water level data are output referenced to the North American Vertical Datum of 1988 (NAVD88).
Note: the latest version of the Depth&DaysSinceDry Tool was updated with new "Days Since Dry" calculations as well as the ability to select an output path for files. Please see the User's Guide (below) for more information.
Prior versions of the Depth&DaysSinceDry required the user to download NetCDF .dll files and the .NET Framework. This is no longer required, as the latest version of the Depth&DaysSinceDry is Java-based. All necessary files are included in the zip file (see below), except for the Java installation. Users will need to have 32-bit Java Virtual Machine (JVM) installed on their system before they can run the Depth&DaysSinceDry. The 64-bit JVM causes issues with the Depth&DaysSinceDry, so if you are running a 64-bit system, please ensure you have the 32-bit JVM installed, not the 64-bit one. Please contact us if you have questions.
User's Guide (pdf, 866 KB, updated July 10, 2012)
You will need the following files:
Required User Input Files:
The EDENapps are no longer available on CERPZone. Java-based versions of EDENapps are downloadable at: http://sofia.usgs.gov/eden/edenapps/index.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710533.96/warc/CC-MAIN-20221128135348-20221128165348-00435.warc.gz
|
CC-MAIN-2022-49
| 2,092
| 11
|
https://csdms.colorado.edu/csdms_wiki/index.php?title=AnalystExecutorSiwenna&oldid=221786
|
code
|
Instructions for installing and configuring a WMT executor on siwenna.
--Mpiper (talk) 15:40, 8 October 2018 (MDT)
Set install and conda directories
The install directory for this executor is /home/csdms/wmt/analyst.
install_dir=/home/csdms/wmt/analyst
mkdir -p $install_dir
Make sure read and execute bits are set on this directory.
chmod 0775 $install_dir
Install a Python distribution to be used locally by WMT. We like to use Miniconda.
cd $install_dir
curl https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -o miniconda.sh
bash ./miniconda.sh -f -b -p $(pwd)/conda
export PATH=$(pwd)/conda/bin:$PATH
If working with an existing Miniconda install, be sure to update everything before continuing:
conda update conda
conda update --all
Install the CSDMS software stack
Clone the `wmt-executor-config-files` repo.
mkdir -p $install_dir/opt && cd $install_dir/opt
git clone https://github.com/mdpiper/wmt-executor-config-files
cd wmt-executor-config-files
Install the CSDMS software stack, including the `babelizer` for components that need to be built from source, in a new environment.
conda install --file wmt-analyst.txt -c csdms-stack -c defaults -c conda-forge
Note that the `netcdf-fortran` package needs to be wound back so that HDF5 1.8.18 is used because of problems with HDF5 1.10.1 and ESMF, used in PyMT (see discussion here). Also note I used a tagged version of PyMT because there have been a lot of changes to it since I built the conda recipes for the permamodel models.
Install executor software
Install the `wmt-exe` package from source.
mkdir -p $install_dir/opt && cd $install_dir/opt
git clone https://github.com/csdms/wmt-exe
cd wmt-exe
python setup.py develop
Create a site configuration file that describes the executor and symlink it to the executor's etc/ directory.
work_dir='/scratch/$USER/wmt/analyst'  # note single quotes
python setup.py configure --wmt-prefix=$install_dir --launch-dir=$work_dir --exec-dir=$work_dir
ln -s "$(realpath wmt.cfg)" $conda_dir/envs/wmt-analyst/etc
Note that we're using /scratch for the launch and execution directories instead of the default ~/.wmt.
Install and test CSDMS components
Each section below describes how to install and test a particular CSDMS component.
Currently installed components:
Update (2018-10-08): CHILD is not working, so I've pulled it.
Note that when running IPython remotely it's helpful to set
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00405.warc.gz
|
CC-MAIN-2023-14
| 2,401
| 28
|
https://thanos.io/v0.16/components/sidecar.md/
|
code
|
thanos sidecar command runs a component that gets deployed along with a Prometheus instance. This allows the sidecar to optionally upload metrics to object storage and allows Queriers to query Prometheus data with a common, efficient StoreAPI.
It implements Thanos’ Store API on top of Prometheus’ remote-read API. This allows Queriers to treat Prometheus servers as yet another source of time series data without directly talking to its APIs.
Optionally, the sidecar uploads TSDB blocks to an object storage bucket as Prometheus produces them every 2 hours. This allows Prometheus servers to be run with relatively low retention while their historic data is made durable and queryable via object storage.
NOTE: This still does NOT mean that Prometheus can be fully stateless, because if it crashes and restarts you will lose ~2 hours of metrics, so a persistent disk for Prometheus is highly recommended. The closest to stateless you can get is using remote write (which Thanos supports; see Receiver). Remote write has other risks and consequences, and even then a crash still loses, in the best case, seconds of metrics data, so a persistent disk is recommended in all cases.
Optionally, the Thanos sidecar is able to watch Prometheus rules and configuration, decompress and substitute environment variables if needed, and ping Prometheus to reload them. Read more about this here
Prometheus servers connected to the Thanos cluster via the sidecar are subject to a few limitations and recommendations for safe operations:
The recommended Prometheus version is 2.2.1 or greater (including newest releases). This is due to Prometheus instability in previous versions as well as lack of
(!) The Prometheus external_labels section of the Prometheus configuration file has unique labels in the overall Thanos system. Those external labels will be used by the sidecar and then Thanos in many places:
--web.enable-admin-api flag is enabled so that the sidecar can get metadata from Prometheus, such as external labels.
--web.enable-lifecycle flag is enabled if you want to use sidecar reloading features (
If you choose to use the sidecar to also upload data to object storage:
--storage.tsdb.min-block-duration and --storage.tsdb.max-block-duration must be set to equal values to disable local compaction in order to use Thanos sidecar upload; otherwise leave local compaction on if the sidecar just exposes StoreAPI and your retention is normal. The default of
2h is recommended. Setting the mentioned parameters to equal values disables the internal Prometheus compaction, which is needed to avoid corruption of uploaded data when the Thanos compactor does its job; this is critical for data consistency and should not be ignored if you plan to use the Thanos compactor. Even though you set the mentioned parameters equal, you might observe the Prometheus internal metric
prometheus_tsdb_compactions_total being incremented; don't be confused by that: Prometheus writes the initial head block to the filesystem via its internal compaction mechanism, but if you have followed the recommendations, data won't be modified by Prometheus before the sidecar uploads it. The Thanos sidecar will also check the sanity of the flags set on Prometheus at startup and log errors or warnings if they have been configured improperly (#838).
Thanos can watch changes in Prometheus configuration and refresh Prometheus configuration if
You can configure watching for changes in directory via
Thanos sidecar can watch
--reloader.config-file=CONFIG_FILE configuration file, replace environment variables found in there in
$(VARIABLE) format, and produce generated config in
thanos sidecar \
--tsdb.path "/path/to/prometheus/data/dir" \
--prometheus.url "http://localhost:9090" \
The example content of
If you want to migrate from a pure Prometheus setup to Thanos and have to keep the historical data, you can use the flag
--shipper.upload-compacted. This will also upload blocks that were compacted by Prometheus. Values greater than 1 in the
compaction.level field of a Prometheus block’s
meta.json file indicate level of compaction.
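As a quick way to see which blocks Prometheus has already compacted, a small Python sketch (assuming the standard TSDB block layout with one meta.json per block directory; the path below is a placeholder):

import json
from pathlib import Path

tsdb_path = Path("/path/to/prometheus/data/dir")  # placeholder path

for meta_file in sorted(tsdb_path.glob("*/meta.json")):
    meta = json.loads(meta_file.read_text())
    level = meta.get("compaction", {}).get("level", 1)
    if level > 1:
        print(f"{meta_file.parent.name}: compaction level {level}")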
To use this, the Prometheus compaction needs to be disabled. This can be done by setting the following flags for Prometheus:
usage: thanos sidecar [<flags>]
Sidecar for Prometheus server
-h, --help Show context-sensitive help (also try
--help-long and --help-man).
--version Show application version.
--log.level=info Log filtering level.
--log.format=logfmt Log format to use. Possible options: logfmt or
Path to YAML file with tracing configuration.
See format details:
Alternative to 'tracing.config-file' flag
(lower priority). Content of YAML file with
tracing configuration. See format details:
Listen host:port for HTTP endpoints.
--http-grace-period=2m Time to wait after an interrupt received for
Listen ip:port address for gRPC endpoints
(StoreAPI). Make sure this address is routable
from other components.
--grpc-grace-period=2m Time to wait after an interrupt received for
--grpc-server-tls-cert="" TLS Certificate for gRPC server, leave blank to
--grpc-server-tls-key="" TLS Key for the gRPC server, leave blank to
TLS CA to verify clients against. If no client
CA is specified, there is no client
verification on server side. (tls.NoClientCert)
URL at which to reach Prometheus's API. For
better performance use local network.
Maximum time to wait for the Prometheus
instance to start up
Controls the http MaxIdleConns. Default is 0,
which is unlimited
Controls the http MaxIdleConnsPerHost
--tsdb.path="./data" Data directory of TSDB.
--reloader.config-file="" Config file watched by the reloader.
Output file for environment variable
substituted config file.
Rule directories for the reloader to refresh
Controls how often reloader re-reads config and
Controls how often reloader retries config
reload in case of error.
Path to YAML file that contains object store
configuration. See format details:
Alternative to 'objstore.config-file' flag
(lower priority). Content of YAML file that
contains object store configuration. See format
If true shipper will try to upload compacted
blocks as well. Useful for migration purposes.
Works only if compaction is disabled on
Prometheus. Do it once and then disable the
flag when done.
Start of time range limit to serve. Thanos
sidecar will serve only metrics, which happened
later than this value. Option can be a constant
time in RFC3339 format or time duration
relative to current time, such as -1d or 2h45m.
Valid duration units are ms, s, m, h, d, w, y.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.0/warc/CC-MAIN-20240421040323-20240421070323-00608.warc.gz
|
CC-MAIN-2024-18
| 6,435
| 83
|
https://modulargrid.net/e/industrial-music-electronics-piston-honda-mkii
|
code
|
This Module is discontinued.
HM1991mk2 - Wavetable Oscillator
The heavyweight champion wavetable oscillator returns. 4,096 waveforms are arranged in a 3-dimensional cube, with voltage control over all three axis positions, smoothly morphing. The morph can optionally be disabled per axis, or have its resolution adjusted. A waveshaping unit can simultaneously process external audio through the currently selected wavetable while the internal oscillator runs, with voltage-controlled gain. The wavetable selection is controlled by three smoothly traveling illuminated sliders with a numerical display indicating your current location in the "cube".
The internal waveform memory comes fully loaded, and the last six locations may be overwritten with the user's custom waveforms using the classic Piston Honda Expander board.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819089.82/warc/CC-MAIN-20240424080812-20240424110812-00643.warc.gz
|
CC-MAIN-2024-18
| 1,050
| 8
|
https://gbatemp.net/threads/it-starting-points-and-questions-about-certifications.107031/
|
code
|
I am soon going to be looking to finally get into IT jobs like I always should have. I am currently living in Michigan where, if you read about this sort of thing, jobs blow. In about a month I am going to move to Virginia for a number of reasons, one being the availability of these sorts of jobs, especially of the starting variety. I do have a high school degree and some college. I have a very good understanding of computers, Windows and networks. I want to begin to work on getting certifications and finish up school and all that. I have already begun to apply for jobs as well. What I want to know is what the pecking order should be for certifications and what I can be doing on my own to make sure I am able to first get a job and ultimately make a successful career out of this. Thanks in advance for any help with this question; I trust this community to help me in this endeavor. Sorry about the semi wall of text!!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267868237.89/warc/CC-MAIN-20180625170045-20180625190045-00061.warc.gz
|
CC-MAIN-2018-26
| 923
| 1
|
https://talk.collegeconfidential.com/michigan-state-university/1703131-michigan-state-early-action.html
|
code
|
Michigan State early action?
I know that the early action deadline for MSU is Nov 1 and my application was submitted on Oct 31, since I just really wanted to apply there all of a sudden. My ACT scores and transcript were sent electronically that day too. I'm just wondering if I made the deadline for maximum scholarship consideration, since they still have to process the stuff even though I submitted my app on Oct 31?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215487.79/warc/CC-MAIN-20180820003554-20180820023554-00662.warc.gz
|
CC-MAIN-2018-34
| 457
| 3
|
https://forta.com/2005/08/03/flex-apps-minus-the-server/
|
code
|
Mike Chambers just shared some really important information about Zorn, and the ability to create MXML/AS applications without needing a Flex server.
Just to make one thing clear, certain features of the Flex framework (in particular those focused toward enterprise level development) will still require the Flex server.
Good point Mike. I deliberately did not post specifics here, if you want to know more visit the link to Mike’s blog that I posted.
I’m in some way assuming that this is in response to OpenLaszlo 3 with it’s new SOLO deployment feature.
For those that don’t know, Laszlo is the open source rival to Flex and have a feeling that they will be hopping over each other on features.
I noted that this is an Eclipse tool, does this mean you are moving away from Dream Weaver, or am I just using the wrong tool ;). I have never been much of an eclipse fan, I can’t stand unpolished interfaces…
Eclipse is amazing. You should have a look at the CFEclipse plugin.
Yea, I tried it, and it is alright. There were just a couple things wrong…
1. Eclipse is primarily used for Java, although you can use plugins all the java stuff is in there.
2. The interface is ass ugly 😉 although that is more of an opinion.
3. I couldn’t find an FTP plugin. This was probably my biggest issue. I am just too used to hitting ctrl+s and checking the results in my browser or hitting f12 to the same effect.
I think if there was a FTP plugin, it wouldn’t be such a big deal.
This is eclipse 3.0 + yes?
It also sounds like you’re coding using EPIAI (Every page is an Island). You should consider learning Fusebox 4 or Mach-II, although I prefer Model-Glue as this is easier for CF developers to understand.
Eclipse is becoming the IDE of choice for many developers. Even Macrobe are pushing it. It’s an industry standard that a lot of software companies are developing plug-ins for. You need to know this tool.
Being Java centric is not exactly true. Yes, its roots are in Java development, but the plugins target the environment to the technology of your choice.
Then again CF is Java these days…
Actually, I haven’t tried 3.0…
I am using FuseBox 4.1 for the bigger sites, but if I am trying out something new like CF flash forms I just do it on one page… I haven’t had a chance to try out Mach-II, mainly because I couldn’t find a book to jump into.
I actually wish I could pull out all the unnecessary crap I don’t use, but I never got into it long enough to try 🙂
I thought about learning java but I have been working hard with my C# stuff so that won’t happen any time soon… although my java for CF devs book is on my bookshelf collecting dust, maybe I should take a look at it 😉
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.17/warc/CC-MAIN-20231205105136-20231205135136-00641.warc.gz
|
CC-MAIN-2023-50
| 2,723
| 21
|
https://www.localresultmarketing.com/blog/a-foggy-day-with-facebook-having-issues
|
code
|
So Facebook is down, and seemingly much more so on the business side. I thought I would take a quick moment to make an introductory video and let you know how I could possibly help you out.
My service is for people who believe that the internet is a key driver of local traffic and wealth generation.
I focus on people who want to grow or solidify their business to provide for themselves, their family, and their co-workers.
I promise that working with me will save you time and frustration and bring you closer to your goals.
If this sounds like you, let's have a 10-minute chat and see if this might work out for both of us.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331228.13/warc/CC-MAIN-20190826064622-20190826090622-00514.warc.gz
|
CC-MAIN-2019-35
| 626
| 5
|
https://play.google.com/store/apps/details?id=jmc.android.pistasGratis&referrer=utm_source%3Dappbrain
|
code
|
Clues is an Android adaptation of the well-known television game Password. Like Password, Clues is a game designed to be played in person with friends and/or family.
- It can be played in both English and Spanish.
- It contains more than 3100 words in English and more than 3700 in Spanish. Therefore, you can play for hours without getting a single word repeated.
- Up to 6 teams can play a game simultaneously.
- The difficulty of a game can be configured by modifying the number of secret words per round and the available time to guess such words.
- The graphical interface is simple, clean and easy to use. In addition, the game includes rules on how to play and offers interactive help that can be activated from the Options menu.
- It is integrated with the Heyzap social network.
How to play
Playing this game requires at least 2 teams of at least 2 players each. The maximum number of teams that can play simultaneously is 6.
The game is divided into rounds, and each round is played by one team: first team 1, then team 2...
At each round, a member of the playing team receives a set of secret words. By means of clues, the player must try to make his team-mates guess the secret words within the available time (if there is one). In particular, for each word received, the player gives his team-mates one clue: a word (in English) that doesn't belong to the same lexical family as the secret word (for example, if the secret word is "bakery", the clue "baker" isn't valid). The player's team-mates, taking the clue into account, try to guess the secret word. If they fail, the player gives them another clue, so they have a second chance to guess the secret word. If they fail again, the player gives them a third clue. If after the third clue they still don't manage to guess the secret word, it is considered failed. Otherwise, it is considered guessed.
The number of secret words that a player receives in a round and the available time to guess such words depend on the configuration of the game (by default, 5 secret words and 30 seconds per word).
When all the teams have played the same number of rounds, the team with the largest number of successful guesses wins.
If you want to support the Clues project and remove the game ads, get the full version for less than 1€.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945459.17/warc/CC-MAIN-20180421223015-20180422003015-00119.warc.gz
|
CC-MAIN-2018-17
| 2,286
| 15
|
https://docs.otc.t-systems.com/en-us/usermanual/bms/en-us_topic_0173720389.html
|
code
|
You can restart BMSs on the console. Only BMSs in the running state can be restarted.
Restarting a BMS will interrupt your services. Exercise caution when performing this operation.
Log in to the management console.
Under Computing, click Bare Metal Server.
The BMS console is displayed.
Locate the row that contains the target BMS, click More in the Operation column, and select Restart from the drop-down list. To restart multiple BMSs, select them and click Restart at the top of the BMS list.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670036.23/warc/CC-MAIN-20191119070311-20191119094311-00028.warc.gz
|
CC-MAIN-2019-47
| 492
| 6
|
https://slashdot.org/~watice
|
code
|
idk if it's been stated before, but the court order sounds more like "help us brute force the key", not "help us decrypt the data". I guess philosophically it's one and the same, but technically slightly different. Here's the exact text.
important functions: (1) it will bypass or disable the auto-erase function whether or not it has been enabled; (2) it will enable the FBI to submit passcodes to the SUBJECT DEVICE for testing electronically via the physical device port, Bluetooth, Wi-Fi, or other protocol available on the SUBJECT DEVICE and (3) it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.
Apple's reasonable technical assistance may include, but is not limited to: providing the FBI with a signed iPhone Software file, recovery bundle, or other Software Image File ("SIF") that can be loaded onto the SUBJECT DEVICE. The SIF will load and run from Random Access Memory and will not modify the iOS on the actual phone, the user data partition or system partition on the device's flash memory. The SIF will be coded by Apple with a unique identifier of the phone so that the SIF would only load and execute on the SUBJECT DEVICE. The SIF will be loaded via Device Firmware Upgrade ("DFU") mode, recovery mode, or other applicable mode available to the FBI. Once active on the SUBJECT DEVICE, the SIF will accomplish the three functions specified in paragraph 2. The SIF will be loaded on the SUBJECT DEVICE at either a government facility, or alternatively, at an Apple facility; if the latter, Apple shall provide the government with remote access to the SUBJECT DEVICE through a computer allowing the government to conduct passcode recovery analysis.
If Apple determines that it can achieve the three functions stated above in paragraph 2, as well as the functionality set forth in paragraph 3, using an alternate technological means from that recommended by the government, and the government concurs, Apple may comply with this Order in that way.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120881.99/warc/CC-MAIN-20170423031200-00383-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 2,135
| 4
|
https://www.sqlskills.com/blogs/bobb/musings-from-the-immersion-event/
|
code
|
Just got back from the SQLskills Immersion event in Chicago. There were a couple of bonus sessions, with Kimberly doing a high-availability talk for Microsoft customers and our students one evening, and Joe Celko making an appearance on Wednesday. Joe spoke on elements of SQL style, and I was able to acquire a signed copy of his new (but not his latest) book "Joe Celko's SQL Programming Style". He has also released a book on analytics since the style book.
With all the weary smiles at dinner on Thursday, I got the idea that everyone in attendance had a good time and was full to overflowing with the topics we presented. If this sounds interesting to you, there are some spaces in our October event in NYC.
An interesting question that came up at dinner was "how do you motivate use of the Service Broker in SQL Server and how mainstream are the use cases"? Here's how.
If you've ever purchased anything on the web, you'll notice that no matter how popular the website, once you navigate through the catalog, fill out the forms with your personal, shipping, and credit info, the actual order screen is quite quick. You get back an acknowledgement very quickly, but it usually only consists of an "echo" of a subset of the data you entered and an order ID. Also, there is likely a hyperlink where you can check the status of your order. Perhaps an email is sent.
Bet they didn't do a credit check, set up billing, shipping, check inventory, and consult the MRP system for a manufacturing schedule while you were waiting, did they? Most or all of this is done asynchronously, probably with queued messages of some sort. Otherwise, it would be quite a transaction, and if all subsystems weren't on the same instance, a (slower) distributed transaction at that.
Generating an order number, saving the order details (maybe in XML format, for later decomposition into the relevant relational tables), and updating/checking the customer table is much faster. And, if the queueing system is inside the database, your queued messages and database updates will be a fast *local* transaction. If you need to save state, you're already in the "state machine/DBMS", rather than one tier or more away. BTW, if you're using SQL Server's Database Mail feature, the email is also sent asynchronously using Service Broker. Gotta save on that synchronous distributed processing. Else you'll get impatient and push "Buy" again. Or perhaps not return for your next purchase.
That's the motivation.
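For readers who have not used Service Broker, here is a minimal T-SQL sketch of the "queue inside the database" idea: the order row and the queued message commit in one local transaction, and a separate reader drains the queue later. All object names (dbo.Orders, OrderIntakeService, and so on) are invented for illustration and are not from the post; treat this as a sketch rather than production code.

-- One-time setup: message type, contract, queues, and services.
CREATE MESSAGE TYPE [//shop/NewOrder] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//shop/OrderContract] ([//shop/NewOrder] SENT BY INITIATOR);
CREATE QUEUE dbo.OrderIntakeQueue;
CREATE QUEUE dbo.OrderProcessingQueue;
CREATE SERVICE OrderIntakeService     ON QUEUE dbo.OrderIntakeQueue     ([//shop/OrderContract]);
CREATE SERVICE OrderProcessingService ON QUEUE dbo.OrderProcessingQueue ([//shop/OrderContract]);
GO

-- Called from the order screen: save the order and enqueue the follow-up work atomically.
CREATE PROCEDURE dbo.SubmitOrder @OrderId INT, @OrderXml XML
AS
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO dbo.Orders (OrderId, OrderXml) VALUES (@OrderId, @OrderXml);

    DECLARE @handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE OrderIntakeService
        TO SERVICE 'OrderProcessingService'
        ON CONTRACT [//shop/OrderContract]
        WITH ENCRYPTION = OFF;

    -- Credit check, billing, shipping, etc. are done later by whatever
    -- RECEIVEs from dbo.OrderProcessingQueue; the buyer never waits for them.
    SEND ON CONVERSATION @handle MESSAGE TYPE [//shop/NewOrder] (@OrderXml);

    COMMIT;
END;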
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473738.92/warc/CC-MAIN-20240222093910-20240222123910-00420.warc.gz
|
CC-MAIN-2024-10
| 2,477
| 7
|
http://www.reddit.com/r/rails/comments/1vtecx/how_do_you_go_about_removing_the_primary_key_id/?sort=controversial
|
code
|
Is there a way to create the model/migration without the primary key? I don't particularly want to do away with the id column, I just don't want the primary key tied to it. How would I generate a model to remove this? PS: I know how to do this in MySQL, I just don't understand how to do it with ActiveRecord.
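A minimal sketch of one way to do this with ActiveRecord (not from the thread; the table, column, and model names are made up for illustration): pass id: false to create_table so no primary-key column is generated, then add id back as an ordinary integer column.

class CreateWidgets < ActiveRecord::Migration
  def change
    # id: false stops Rails from generating the auto-increment primary key;
    # adding :id as a plain integer keeps the column without the PK constraint.
    create_table :widgets, id: false do |t|
      t.integer :id
      t.string  :name
      t.timestamps
    end
  end
end

# Optional: if some other column should act as the primary key, point the model at it.
class Widget < ActiveRecord::Base
  self.primary_key = :name
end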
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274866.27/warc/CC-MAIN-20140728011754-00457-ip-10-146-231-18.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 305
| 1
|
https://mail.python.org/pipermail/new-bugs-announce/2009-May/004891.html
|
code
|
[New-bugs-announce] [issue6008] Idle should be installed as `idle3.1` and not `idle3`
report at bugs.python.org
Wed May 13 02:49:53 CEST 2009
New submission from Sridhar Ratnakumar <sridharr at activestate.com>:
In Python2.x, Idle is installed as idle2.x. This is the case with Linux
However, in Py3.1b2, Idle is installed as `idle3`.
Expected script name is `idle3.1`.
title: Idle should be installed as `idle3.1` and not `idle3`
versions: Python 3.1
Python tracker <report at bugs.python.org>
More information about the New-bugs-announce
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649741.26/warc/CC-MAIN-20230604093242-20230604123242-00732.warc.gz
|
CC-MAIN-2023-23
| 539
| 11
|
https://www.itonics-innovation.com/knowledge/2020-06-05-power-search-improvement-and-bug-fixes
|
code
|
The Workspace is now encoded in the URL, making views fully shareable. Additionally, URLs of search fields that include the property wildcard (*) in the power search can now also be shared.
The CONTAINS operator is now fully functional in the power search.
When zooming into a bubble in the cluster visualization, you will now see the total count of the documents and the count of the documents that are in the children only. The difference in documents is assigned to the current bubble only.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646652.16/warc/CC-MAIN-20230610233020-20230611023020-00143.warc.gz
|
CC-MAIN-2023-23
| 492
| 3
|
https://americasoftwarejobs.com/linux-training-certification/introduction-to-c-and-linux-linux-training-certification/
|
code
|
Best Linux certification! Introduction To C++ And Linux. Verifiable certifications with The Linux Foundation. Among top certifications, CompTIA Linux+ is the only Linux certification covering the foundational skills required.
Linux Academy – Introduction To C++ And Linux
Linux Academy is a provider of online training courses and certifications for Linux and cloud-related technologies. Its courses cover a variety of topics, including Linux administration, AWS, Azure, Google Cloud, and more. It also offers hands-on labs and quizzes to help learners test their understanding and skills. Linux Academy's goal is to provide high-quality training and resources to help individuals and organizations advance their careers and improve their IT infrastructure.
Linux Foundation – Introduction To C++ And Linux
The Linux Foundation is a nonprofit organization dedicated to promoting, protecting, and advancing the Linux operating system and open-source software. It hosts a variety of projects and initiatives, including the Linux kernel, the Linux operating system, and the Open Network Automation Platform.
Linux Certification – Introduction To C++ And Linux
A Linux certification verifies a person's expertise and skills in using the Linux operating system and its associated technologies. These certifications are offered by various organizations, including the Linux Professional Institute (LPI), the Red Hat Certified Engineer (RHCE) program, and CompTIA Linux+.
The Linux Professional Institute (LPI) offers several levels of Linux certification, including LPIC-1 (Junior Level Linux Professional), LPIC-2 (Advanced Level Linux Professional), and LPIC-3 (Senior Level Linux Professional). These certifications test a person's understanding of Linux administration, system configuration, and troubleshooting.
The Red Hat Certified Engineer (RHCE) program is offered by Red Hat, a leading provider of enterprise-level Linux solutions. The RHCE certification validates an individual's skills in configuring, managing, and troubleshooting Red Hat Linux systems.
The CompTIA Linux+ certification is vendor-neutral and evaluates an individual's understanding of Linux administration, system configuration, and troubleshooting. It is recognized by many employers as an indication of proficiency in Linux.
In summary, a Linux certification is a valuable asset for IT professionals looking to advance their careers in Linux administration, system configuration, and troubleshooting. These certifications are recognized by employers and can increase an individual's earning potential.
Which is the best Linux Certification?
The best Linux certification depends on the individual's goals and needs. Some popular Linux certifications include:
1. Linux Professional Institute Certification (LPIC) – Recognized worldwide, it covers a wide range of Linux topics including system administration, networking, and security.
2. Red Hat Certified Engineer (RHCE) – Specific to Red Hat Linux and regarded as a highly respected certification in the industry.
3. CompTIA Linux+ – Covers Linux fundamentals, command-line skills, and system administration. It is vendor-neutral and a good starting point for those new to Linux.
4. Oracle Linux Certified Administrator (OCA) – Specific to Oracle Linux, covering topics such as system administration, security, and troubleshooting.
Ultimately, the best Linux certification depends on the individual's career goals and the type of Linux environment they will be working in.
Which is the best Linux Training? – Introduction To C++ And Linux
There are several Linux training options available, and the best one for you will depend on your specific needs and learning style. Some popular Linux training options include:
1. Linux Professional Institute (LPI) certification: A widely recognized Linux certification covering topics such as system administration, network management, and security.
2. Linux Foundation Certified System Administrator (LFCS): Designed for system administrators who want hands-on experience with Linux.
3. Linux Academy: An online Linux training platform offering numerous courses for system administrators, developers, and network administrators.
4. Red Hat Certified Engineer (RHCE): Specific to Red Hat Linux, focusing on system administration and network management.
5. Udemy Linux courses: Udemy offers a wide range of Linux courses for beginners and advanced users, including Linux for beginners, the Linux command line, and Linux server administration.
Ultimately, the best Linux training for you will depend on your learning style, the particular Linux distribution you will be using, and the type of Linux-related job you are aiming for.
The Linux Foundation Releases an Open Source Suite
The Linux Foundation, a nonprofit organization devoted to promoting and supporting open source software, has announced the release of a suite of open source tools and resources for developers. The suite includes a variety of tools for software development, testing, and deployment, along with resources for learning and collaboration.
The suite includes prominent open source projects such as the Linux operating system, the Apache web server, and the MySQL database. It also includes tools for software development and testing, such as the Eclipse integrated development environment (IDE) and the Jenkins continuous integration and delivery platform.
In addition to the tools, the suite includes resources for learning and collaboration, such as online tutorials, webinars, and forums. The Linux Foundation hopes that this suite will make it easier for developers to access the resources they need to develop, test, and deploy open source software.
The Linux Foundation also offers training and certification programs for developers, which can help them learn about open source software and gain the skills they need to work with it.
Overall, the Linux Foundation's suite of open source tools and resources is a useful offering for developers looking to work with open source software. With a range of tools and resources available, developers can easily access what they need to develop, test, and deploy open source software.
Amazon Lumberyard, the Linux Foundation, and Open Source
Amazon Lumberyard is a free, cross-platform game engine created by Amazon Web Services (AWS) and based on CryEngine. It is designed for game developers of all levels, from beginners to experienced professionals, and includes features such as a visual scripting system, an integrated physics engine, and support for virtual reality (VR) and augmented reality (AR) development.
The Linux Foundation is a nonprofit organization that promotes the use of open-source software and supports the Linux operating system. It hosts and supports a wide variety of open-source projects, including the Linux kernel, Kubernetes, and Hyperledger.
Open source refers to software that is freely available for anyone to use, modify, and distribute. It is typically developed and maintained by a community of volunteers and is usually licensed under terms that enable collaboration and sharing. Examples of open-source software include Linux, Apache, and Mozilla Firefox.
What is the future scope of Linux Certification courses? Introduction To C++ And Linux
The future scope of Linux certification courses is very promising, as the demand for Linux professionals is increasing across industries such as IT, cloud computing, data center management, and cybersecurity. Linux is considered a stable, secure, and cost-effective operating system and is widely used in enterprise environments.
As more companies move toward cloud computing and data center management, the demand for Linux professionals who can manage and maintain these systems is increasing. Linux is also widely used in cybersecurity and is regarded as a secure operating system.
Additionally, Linux is being used in the Internet of Things (IoT) and embedded systems, which are growing rapidly. This creates substantial demand for Linux professionals who can develop and maintain these systems.
Overall, the future of Linux certification courses is very bright: demand for Linux professionals is rising across various sectors, and opportunities for them are expected to grow.
What should my first Linux Certification be to get an entry-level position? Introduction To C++ And Linux
The Linux Professional Institute Certification (LPIC-1) is a widely recognized entry-level Linux certification that is commonly required for entry-level Linux positions. It covers basic Linux administration tasks and concepts, including installation and configuration, system maintenance, and fundamental networking. Obtaining the LPIC-1 certification demonstrates your understanding of Linux and your ability to perform basic administration tasks, making you a strong candidate for entry-level Linux positions.
Which Linux Certification is better? Linux Foundation or LPIC?
Both the Linux Foundation and LPIC (Linux Professional Institute Certification) offer different types of Linux certifications that cater to different skill levels and career paths. It ultimately depends on your goals and what you want to achieve with your certification.
The Linux Foundation offers certifications such as:
• Linux Foundation Certified Engineer (LFCE) – Aimed at experienced Linux professionals and system administrators who want to demonstrate their expertise in Linux administration and troubleshooting.
• Linux Foundation Certified System Administrator (LFCS) – Designed for system administrators who want to demonstrate their skills in managing Linux systems.
On the other hand, LPIC offers certifications such as:
• LPIC-1 – Aimed at entry-level Linux professionals, covering fundamental Linux administration and troubleshooting.
• LPIC-2 – Aimed at experienced Linux professionals, covering advanced Linux administration and troubleshooting.
Both Linux Foundation and LPIC certifications are widely recognized and respected in the industry. Ultimately, the best certification for you will depend on your existing skills, career goals, and the types of roles you are interested in pursuing. It is advisable to research both and determine which one aligns best with your career aspirations.
Which Linux Certification is best for getting a job at leading tech firms?
Several Linux certifications are recognized and respected by leading technology companies. Some of the most popular and widely recognized certifications include:
1. Linux Professional Institute Certification (LPIC) – Recognized by major Linux vendors such as Red Hat, SUSE, and Canonical. It covers a wide range of Linux topics, including installation, system administration, and security.
2. Red Hat Certified Engineer (RHCE) – Specific to Red Hat Linux, focusing on advanced system administration and security. It is highly valued by companies that use Red Hat Linux as their primary operating system.
3. CompTIA Linux+ – Vendor-neutral, covering a wide range of Linux topics, including installation, system administration, and security. It is recognized by major Linux vendors such as Red Hat, SUSE, and Canonical.
Ultimately, the best Linux certification for getting a job at a top technology company will depend on the specific job requirements and the company's preferred Linux distribution. It is important to research which certifications are in demand at the companies you are interested in working for.
What is the significance of Red Hat Linux certifications?
Red Hat Linux certifications are important for a number of reasons:
1. They demonstrate competence and knowledge in the use and management of Red Hat Linux systems.
2. They are recognized and valued in the IT sector as an indication of professional capability.
3. They can open up new career opportunities, such as system administration or IT management roles.
4. They provide a competitive edge in the job market, as many employers prefer to hire people with certifications.
5. They can lead to higher salaries and better job benefits.
6. They offer a way for professionals to stay current with the latest technologies and innovations in the field.
7. They support continuing education and professional development.
Overall, Red Hat Linux certifications can provide a variety of benefits for IT professionals, including increased expertise, career opportunities, and improved earning potential.
In summary: Linux Academy is a provider of online training courses and certifications for Linux and cloud-related technologies. The Linux Foundation hosts a variety of projects and initiatives, including the Linux kernel, the Linux operating system, and the Open Network Automation Platform. CompTIA Linux+ covers Linux fundamentals, command-line skills, and system administration. The Linux Professional Institute Certification (LPIC-1) is a widely recognized entry-level Linux certification that is often required for entry-level Linux positions; obtaining it demonstrates your understanding of Linux and your ability to perform basic administration tasks, making you a strong candidate for those roles.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100534.18/warc/CC-MAIN-20231204182901-20231204212901-00377.warc.gz
|
CC-MAIN-2023-50
| 15,496
| 69
|
https://forum.skritter.com/t/need-help-to-setup-wacom-bamboo-pen-to-do-chinese-writing-with-skritter/960
|
code
|
I am a new subscriber to Skritter. I came to Skritter through a recommendation from Hacking Chinese.
I can recognize more than 1,000 Chinese characters, but I really want to learn to write them so I can expand my Chinese vocabulary to the 2,500 and 3,500 character level.
But I am having the hardest time setting up the Bamboo Pen to work with Skritter's handwriting input.
I would really appreciate any help from others who can suggest a way to accomplish this.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00478.warc.gz
|
CC-MAIN-2022-27
| 463
| 4
|
http://citrusframework.org/samples/databind/
|
code
|
This sample demonstrates the usage of object mapping in Citrus. We are able to handle automatic object mapping when sending and receiving message payloads. Read about this feature in the reference guide.
The todo-list sample application provides a REST API for managing todo entries. We call this API with object mapping in Citrus so that we do not need to write message payload JSON or XML structures but use the model objects directly in our test cases.
In test cases we can use the model objects directly as message payload.
As you can see, we are able to send the model object directly. Citrus automatically converts the object to application/json message content for the POST request. In a receive action, we can use a mapping validation callback in order to get access to the model objects of an incoming message payload.
The validation callback gets the model object as first method parameter. You can now add some validation logic with assertions on the model.
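Under the hood, this object mapping is essentially Jackson converting between the model object and the JSON payload. The snippet below is not the Citrus DSL itself, just a minimal Jackson sketch of that conversion with a hypothetical TodoEntry model; Citrus performs the equivalent marshalling when you send the object and the reverse mapping before it hands the model to the validation callback.

import com.fasterxml.jackson.databind.ObjectMapper;

public class ObjectMappingSketch {

    // Hypothetical model class standing in for the todo-list sample's entry type.
    public static class TodoEntry {
        public String title;
        public boolean done;
        public TodoEntry() { }                      // no-args constructor needed for deserialization
        public TodoEntry(String title, boolean done) { this.title = title; this.done = done; }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Sending: the model object becomes the application/json request body.
        String json = mapper.writeValueAsString(new TodoEntry("buy milk", false));
        System.out.println(json);                   // {"title":"buy milk","done":false}

        // Receiving: the response body is mapped back to the model, ready for assertions,
        // which is what the mapping validation callback hands you as its first parameter.
        TodoEntry entry = mapper.readValue(json, TodoEntry.class);
        System.out.println("buy milk".equals(entry.title));   // true
    }
}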
You can run the sample on your localhost in order to see Citrus in action. Read the instructions how to run the sample.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937016.16/warc/CC-MAIN-20180419184909-20180419204909-00180.warc.gz
|
CC-MAIN-2018-17
| 1,081
| 6
|
https://bugs.jquery.com/ticket/7067
|
code
|
Incorrect effects-unit-test could conceal failures
Reported by: bugbegone
Keywords: fx unit test
In test/unit/effect.js on Line 519 the superfluous
notEqual(jQuery.css(this, "height") + "px", old_h, ....
should be removed.
css(...) returns the value with units intact anyway. Thus, if jQuery.css(this, "height") and old_h happen to have the same value, the notEqual check can't catch it, because the appended "px" string causes the two values to always differ.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123102.83/warc/CC-MAIN-20170423031203-00536-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 488
| 7
|
https://clydecarey.medium.com/livestreamed-two-bands-d3419f9c9376?source=post_internal_links---------3----------------------------
|
code
|
…and still had time for a most productive day of software!
Tonight, as I often do, I ran two bands thru my regular audio recording chain, but livestreamed it as a show. For years, I have been streaming recording education, various how-tos, soldering adventures and pretty much anything audio related that I was doing live at the time. This pandemic put me in a special place, having been doing this very thing for so long, and it's great to give back to the music community, which is starving to play, and to fans starving to see anything resembling live music.
The software being used tonight, REAPER Digital Audio Workstation, is pretty much my claim to fame in the audio software world. I had actually become a known quantity earlier on for other software, like impulses, and especially pioneering the use of impulse responses AS speaker simulators, which is a whole industry now, but like every other endeavor I embark upon, I never really do the business side. And today of course I have aiXDSP to be proud of, but I’ve been a part of a lot lot lot lot more audio industry software than most know about.
I’ve always been interested in tools that get you OUT of thinking about the tool itself. Always trying for the quickest path between what you hear in your head, and actually making that sound come out of the speakers in front of you. What types of gizmos can I make, so that the recording process becomes more than just an extension of your body but something you can actually think “in” rather than think about.
And there we are in software. I want to be able to think in code. I don’t want to think about syntax, or rules. I want to apply propositional logic and other tools to make things happen intuitively with programming. I’m far from there, but today, I felt a little like that solder jockey 30 years ago, running for food and making coffee for the likes of Rick Rubin, Max Norman, and John Hampton.
There’s a lot of cleanup to do, and tons of better ways to do it, but today in core programming, I got a working version of:
New Enemy Movement
Aggressive Enemy Type
and Avoid Enemy Shot!
Despite days where I really feel down in the dumps about progress, today really felt like I was getting places, and the code just came flowing out….I can’t give a lecture in this language yet, but I can certainly ask where the restroom is.
And especially understand the answer people would give to the question.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00011.warc.gz
|
CC-MAIN-2021-39
| 2,430
| 11
|