repo stringlengths 5 92 | file_url stringlengths 80 287 | file_path stringlengths 5 197 | content stringlengths 0 32.8k | language stringclasses 1 value | license stringclasses 7 values | commit_sha stringlengths 40 40 | retrieved_at stringdate 2026-01-04 15:37:27 2026-01-04 17:58:21 | truncated bool 2 classes |
|---|---|---|---|---|---|---|---|---|
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/email_agent.rb | app/models/agents/email_agent.rb | require 'net/smtp'
module Agents
class EmailAgent < Agent
include EmailConcern
can_dry_run!
default_schedule "never"
cannot_create_events!
no_bulk_receive!
description <<~MD
The Email Agent sends any events it receives via email immediately.
You can specify the email's subject line by providing a `subject` option, which can contain [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) formatting. E.g.,
you could provide `"Huginn email"` to set a simple subject, or `{{subject}}` to use the `subject` key from the incoming Event.
By default, the email body will contain an optional `headline`, followed by a listing of the Events' keys.
You can customize the email body by including the optional `body` param. Like the `subject`, the `body` can be a simple message
or a Liquid template. You could send only the Event's `some_text` field with a `body` set to `{{ some_text }}`.
The body can contain simple HTML and will be sanitized. Note that when using `body`, it will be wrapped with `<html>` and `<body>` tags,
so you do not need to add these yourself.
You can specify one or more `recipients` for the email, or skip the option in order to send the email to your
account's default email address.
You can provide a `from` address for the email, or leave it blank to default to the value of `EMAIL_FROM_ADDRESS` (`#{ENV['EMAIL_FROM_ADDRESS']}`).
You can provide a `content_type` for the email and specify `text/plain` or `text/html` to be sent.
If you do not specify `content_type`, then the recipient email server will determine the correct rendering.
Set `expected_receive_period_in_days` to the maximum amount of time that you'd expect to pass between Events being received by this Agent.
MD
def default_options
{
'subject' => "You have a notification!",
'headline' => "Your notification:",
'expected_receive_period_in_days' => "2"
}
end
def working?
received_event_without_error?
end
def receive(incoming_events)
incoming_events.each do |event|
recipients(event.payload).each do |recipient|
SystemMailer.send_message(
to: recipient,
from: interpolated(event)['from'],
subject: interpolated(event)['subject'],
headline: interpolated(event)['headline'],
body: interpolated(event)['body'],
content_type: interpolated(event)['content_type'],
groups: [present(event.payload)]
).deliver_now
log "Sent mail to #{recipient} with event #{event.id}"
rescue StandardError => e
error("Error sending mail to #{recipient} with event #{event.id}: #{e.message}")
raise
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/user_location_agent.rb | app/models/agents/user_location_agent.rb | require 'securerandom'
module Agents
class UserLocationAgent < Agent
cannot_be_scheduled!
gem_dependency_check { defined?(Haversine) }
description do
<<~MD
The User Location Agent creates events based on WebHook POSTS that contain a `latitude` and `longitude`. You can use the [POSTLocation](https://github.com/cantino/post_location) or [PostGPS](https://github.com/chriseidhof/PostGPS) iOS app to post your location to `https://#{ENV['DOMAIN']}/users/#{user.id}/update_location/:secret` where `:secret` is specified in your options.
#{'## Include `haversine` in your Gemfile to use this Agent!' if dependencies_missing?}
If you want to only keep more precise locations, set `max_accuracy` to the upper bound, in meters. The default name for this field is `accuracy`, but you can change this by setting a value for `accuracy_field`.
If you want to require a certain distance traveled, set `min_distance` to the minimum distance, in meters. Note that GPS readings and the measurement itself aren't exact, so don't rely on this for precision filtering.
To view the locations on a map, set `api_key` to your [Google Maps JavaScript API key](https://developers.google.com/maps/documentation/javascript/get-api-key#key).
MD
end
event_description <<~MD
Assuming you're using the iOS application, events look like this:
{
"latitude": "37.12345",
"longitude": "-122.12345",
"timestamp": "123456789.0",
"altitude": "22.0",
"horizontal_accuracy": "5.0",
"vertical_accuracy": "3.0",
"speed": "0.52595",
"course": "72.0703",
"device_token": "..."
}
MD
def working?
event_created_within?(2) && !recent_error_logs?
end
def default_options
{
'secret' => SecureRandom.hex(7),
'max_accuracy' => '',
'min_distance' => '',
'api_key' => '',
}
end
def validate_options
errors.add(:base,
"secret is required and must be longer than 4 characters") unless options['secret'].present? && options['secret'].length > 4
end
def receive(incoming_events)
incoming_events.each do |event|
interpolate_with(event) do
handle_payload event.payload
end
end
end
def receive_web_request(params, method, format)
params = params.symbolize_keys
if method != 'post'
return ['Not Found', 404]
end
if interpolated['secret'] != params[:secret]
return ['Not Authorized', 401]
end
handle_payload params.except(:secret)
['ok', 200]
end
private
def handle_payload(payload)
location = Location.new(payload)
accuracy_field = interpolated[:accuracy_field].presence || "accuracy"
def accurate_enough?(payload, accuracy_field)
!interpolated[:max_accuracy].present? || !payload[accuracy_field] || payload[accuracy_field].to_i < interpolated[:max_accuracy].to_i
end
def far_enough?(payload)
if memory['last_location'].present?
travel = Haversine.distance(
memory['last_location']['latitude'].to_i,
memory['last_location']['longitude'].to_i,
payload['latitude'].to_i,
payload['longitude'].to_i
).to_meters
!interpolated[:min_distance].present? || travel > interpolated[:min_distance].to_i
else # for the first run, before "last_location" exists
true
end
end
if location.present? && accurate_enough?(payload, accuracy_field) && far_enough?(payload)
if interpolated[:max_accuracy].present? && !payload[accuracy_field].present?
log "Accuracy field missing; all locations will be kept"
end
create_event(payload:, location:)
memory["last_location"] = payload
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/data_output_agent.rb | app/models/agents/data_output_agent.rb | module Agents
class DataOutputAgent < Agent
include WebRequestConcern
cannot_be_scheduled!
cannot_create_events!
description do
<<~MD
The Data Output Agent outputs received events as either RSS or JSON. Use it to output a public or private stream of Huginn data.
This Agent will output data at:
`https://#{ENV['DOMAIN']}#{Rails.application.routes.url_helpers.web_requests_path(agent_id: ':id', user_id:, secret: ':secret', format: :xml)}`
where `:secret` is one of the allowed secrets specified in your options and the extension can be `xml` or `json`.
You can setup multiple secrets so that you can individually authorize external systems to
access your Huginn data.
Options:
* `secrets` - An array of tokens that the requestor must provide for light-weight authentication.
* `expected_receive_period_in_days` - How often you expect data to be received by this Agent from other Agents.
* `template` - A JSON object representing a mapping between item output keys and incoming event values. Use [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) to format the values. Values of the `link`, `title`, `description` and `icon` keys will be put into the \\<channel\\> section of RSS output. Value of the `self` key will be used as URL for this feed itself, which is useful when you serve it via reverse proxy. The `item` key will be repeated for every Event. The `pubDate` key for each item will have the creation time of the Event unless given.
* `events_to_show` - The number of events to output in RSS or JSON. (default: `40`)
* `ttl` - A value for the \\<ttl\\> element in RSS output. (default: `60`)
* `ns_dc` - Add [DCMI Metadata Terms namespace](http://purl.org/dc/elements/1.1/) in output xml
* `ns_media` - Add [yahoo media namespace](https://en.wikipedia.org/wiki/Media_RSS) in output xml
* `ns_itunes` - Add [itunes compatible namespace](http://lists.apple.com/archives/syndication-dev/2005/Nov/msg00002.html) in output xml
* `rss_content_type` - Content-Type for RSS output (default: `application/rss+xml`)
* `response_headers` - An object with any custom response headers. (example: `{"Access-Control-Allow-Origin": "*"}`)
* `push_hubs` - Set to a list of PubSubHubbub endpoints you want to publish an update to every time this agent receives an event. (default: none) Popular hubs include [Superfeedr](https://pubsubhubbub.superfeedr.com/) and [Google](https://pubsubhubbub.appspot.com/). Note that publishing updates will make your feed URL known to the public, so if you want to keep it secret, set up a reverse proxy to serve your feed via a safe URL and specify it in `template.self`.
If you'd like to output RSS tags with attributes, such as `enclosure`, use something like the following in your `template`:
"enclosure": {
"_attributes": {
"url": "{{media_url}}",
"length": "1234456789",
"type": "audio/mpeg"
}
},
"another_tag": {
"_attributes": {
"key": "value",
"another_key": "another_value"
},
"_contents": "tag contents (can be an object for nesting)"
}
# Ordering events
#{description_events_order('events')}
DataOutputAgent will select the last `events_to_show` entries of its received events sorted in the order specified by `events_order`, which is defaulted to the event creation time.
So, if you have multiple source agents that may create many events in a run, you may want to either increase `events_to_show` to have a larger "window", or specify the `events_order` option to an appropriate value (like `date_published`) so events from various sources are properly mixed in the resulted feed.
There is also an option `events_list_order` that only controls the order of events listed in the final output, without attempting to maintain a total order of received events. It has the same format as `events_order` and is defaulted to `#{Utils.jsonify(DEFAULT_EVENTS_ORDER['events_list_order'])}` so the selected events are listed in reverse order like most popular RSS feeds list their articles.
# Liquid Templating
In [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) templating, the following variable is available:
* `events`: An array of events being output, sorted in the given order, up to `events_to_show` in number. For example, if source events contain a site title in the `site_title` key, you can refer to it in `template.title` by putting `{{events.first.site_title}}`.
MD
end
def default_options
{
"secrets" => ["a-secret-key"],
"expected_receive_period_in_days" => 2,
"template" => {
"title" => "XKCD comics as a feed",
"description" => "This is a feed of recent XKCD comics, generated by Huginn",
"item" => {
"title" => "{{title}}",
"description" => "Secret hovertext: {{hovertext}}",
"link" => "{{url}}"
}
},
"ns_media" => "true"
}
end
def working?
last_receive_at && last_receive_at > options['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def validate_options
if options['secrets'].is_a?(Array) && options['secrets'].length > 0
options['secrets'].each do |secret|
case secret
when %r{[/.]}
errors.add(:base, "secret may not contain a slash or dot")
when String
else
errors.add(:base, "secret must be a string")
end
end
else
errors.add(:base, "Please specify one or more secrets for 'authenticating' incoming feed requests")
end
unless options['expected_receive_period_in_days'].present? && options['expected_receive_period_in_days'].to_i > 0
errors.add(:base,
"Please provide 'expected_receive_period_in_days' to indicate how many days can pass before this Agent is considered to be not working")
end
unless options['template'].present? && options['template']['item'].present? && options['template']['item'].is_a?(Hash)
errors.add(:base, "Please provide template and template.item")
end
case options['push_hubs']
when nil
when Array
options['push_hubs'].each do |hub|
case hub
when /\{/
# Liquid templating
when String
begin
URI.parse(hub)
rescue URI::Error
errors.add(:base, "invalid URL found in push_hubs")
break
end
else
errors.add(:base, "push_hubs must be an array of endpoint URLs")
break
end
end
else
errors.add(:base, "push_hubs must be an array")
end
end
def events_to_show
(interpolated['events_to_show'].presence || 40).to_i
end
def feed_ttl
(interpolated['ttl'].presence || 60).to_i
end
def feed_title
interpolated['template']['title'].presence || "#{name} Event Feed"
end
def feed_link
interpolated['template']['link'].presence || "https://#{ENV['DOMAIN']}"
end
def feed_url(options = {})
interpolated['template']['self'].presence ||
feed_link + Rails.application.routes.url_helpers.web_requests_path(
agent_id: id || ':id',
user_id:,
secret: options[:secret],
format: options[:format]
)
end
def feed_icon
interpolated['template']['icon'].presence || feed_link + '/favicon.ico'
end
def itunes_icon
if boolify(interpolated['ns_itunes'])
"<itunes:image href=#{feed_icon.encode(xml: :attr)} />"
end
end
def feed_description
interpolated['template']['description'].presence || "A feed of Events received by the '#{name}' Huginn Agent"
end
def rss_content_type
interpolated['rss_content_type'].presence || 'application/rss+xml'
end
def xml_namespace
namespaces = ['xmlns:atom="http://www.w3.org/2005/Atom"']
if boolify(interpolated['ns_dc'])
namespaces << 'xmlns:dc="http://purl.org/dc/elements/1.1/"'
end
if boolify(interpolated['ns_media'])
namespaces << 'xmlns:media="http://search.yahoo.com/mrss/"'
end
if boolify(interpolated['ns_itunes'])
namespaces << 'xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"'
end
namespaces.join(' ')
end
def push_hubs
interpolated['push_hubs'].presence || []
end
DEFAULT_EVENTS_ORDER = {
'events_order' => nil,
'events_list_order' => [["{{_index_}}", "number", true]],
}
def events_order(key = SortableEvents::EVENTS_ORDER_KEY)
super || DEFAULT_EVENTS_ORDER[key]
end
def latest_events(reload = false)
received_events = received_events().reorder(id: :asc)
events =
if (event_ids = memory[:event_ids]) &&
memory[:events_order] == events_order &&
memory[:events_to_show] >= events_to_show
received_events.where(id: event_ids).to_a
else
memory[:last_event_id] = nil
reload = true
[]
end
if reload
memory[:events_order] = events_order
memory[:events_to_show] = events_to_show
new_events =
if last_event_id = memory[:last_event_id]
received_events.where(Event.arel_table[:id].gt(last_event_id)).to_a
else
source_ids.flat_map { |source_id|
# dig twice as many events as the number of
# `events_to_show`
received_events.where(agent_id: source_id)
.last(2 * events_to_show)
}.sort_by(&:id)
end
unless new_events.empty?
memory[:last_event_id] = new_events.last.id
events.concat(new_events)
end
end
events = sort_events(events).last(events_to_show)
if reload
memory[:event_ids] = events.map(&:id)
end
events
end
def receive_web_request(params, method, format)
unless interpolated['secrets'].include?(params['secret'])
if format =~ /json/
return [{ error: "Not Authorized" }, 401]
else
return ["Not Authorized", 401]
end
end
source_events = sort_events(latest_events, 'events_list_order')
interpolate_with('events' => source_events) do
items = source_events.map do |event|
interpolated = interpolate_options(options['template']['item'], event)
interpolated['guid'] = {
'_attributes' => { 'isPermaLink' => 'false' },
'_contents' => interpolated['guid'].presence || event.id
}
date_string = interpolated['pubDate'].to_s
date =
begin
Time.zone.parse(date_string) # may return nil
rescue StandardError => e
error "Error parsing a \"pubDate\" value \"#{date_string}\": #{e.message}"
nil
end || event.created_at
interpolated['pubDate'] = date.rfc2822.to_s
interpolated
end
now = Time.now
if format =~ /json/
content = {
'title' => feed_title,
'description' => feed_description,
'pubDate' => now,
'items' => simplify_item_for_json(items)
}
return [content, 200, "application/json", interpolated['response_headers'].presence]
else
hub_links = push_hubs.map { |hub|
<<-XML
<atom:link rel="hub" href=#{hub.encode(xml: :attr)}/>
XML
}.join
items = items_to_xml(items)
return [<<~XML, 200, rss_content_type, interpolated['response_headers'].presence]
<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" #{xml_namespace}>
<channel>
<atom:link href=#{feed_url(secret: params['secret'], format: :xml).encode(xml: :attr)} rel="self" type="application/rss+xml" />
<atom:icon>#{feed_icon.encode(xml: :text)}</atom:icon>
#{itunes_icon}
#{hub_links}
<title>#{feed_title.encode(xml: :text)}</title>
<description>#{feed_description.encode(xml: :text)}</description>
<link>#{feed_link.encode(xml: :text)}</link>
<lastBuildDate>#{now.rfc2822.to_s.encode(xml: :text)}</lastBuildDate>
<pubDate>#{now.rfc2822.to_s.encode(xml: :text)}</pubDate>
<ttl>#{feed_ttl}</ttl>
#{items}
</channel>
</rss>
XML
end
end
end
def receive(incoming_events)
url = feed_url(secret: interpolated['secrets'].first, format: :xml)
# Reload new events and update cache
latest_events(true)
push_hubs.each do |hub|
push_to_hub(hub, url)
end
end
private
class XMLNode
def initialize(tag_name, attributes, contents)
@tag_name = tag_name
@attributes = attributes
@contents = contents
end
def to_xml(options)
if @contents.is_a?(Hash)
options[:builder].tag! @tag_name, @attributes do
@contents.each { |key, value|
ActiveSupport::XmlMini.to_tag(key, value, options.merge(skip_instruct: true))
}
end
else
options[:builder].tag! @tag_name, @attributes, @contents
end
end
end
def simplify_item_for_xml(item)
if item.is_a?(Hash)
item.each.with_object({}) do |(key, value), memo|
memo[key] =
if value.is_a?(Hash)
if value.key?('_attributes') || value.key?('_contents')
XMLNode.new(key, value['_attributes'], simplify_item_for_xml(value['_contents']))
else
simplify_item_for_xml(value)
end
else
value
end
end
elsif item.is_a?(Array)
item.map { |value| simplify_item_for_xml(value) }
else
item
end
end
def simplify_item_for_json(item)
if item.is_a?(Hash)
item.each.with_object({}) do |(key, value), memo|
if value.is_a?(Hash)
if value.key?('_attributes') || value.key?('_contents')
contents =
if value['_contents'] && value['_contents'].is_a?(Hash)
simplify_item_for_json(value['_contents'])
elsif value['_contents']
{ "contents" => value['_contents'] }
else
{}
end
memo[key] = contents.merge(value['_attributes'] || {})
else
memo[key] = simplify_item_for_json(value)
end
else
memo[key] = value
end
end
elsif item.is_a?(Array)
item.map { |value| simplify_item_for_json(value) }
else
item
end
end
def items_to_xml(items)
simplify_item_for_xml(items)
.to_xml(skip_types: true, root: "items", skip_instruct: true, indent: 1)
.gsub(%r{
(?<indent> ^\ + ) < (?<tagname> [^> ]+ ) > \n
(?<children>
(?: \k<indent> \ < \k<tagname> (?:\ [^>]*)? > [^<>]*? </ \k<tagname> > \n )+
)
\k<indent> </ \k<tagname> > \n
}mx) { $~[:children].gsub(/^ /, '') } # delete redundant nesting of array elements
.gsub(%r{
(?<indent> ^\ + ) < [^> ]+ /> \n
}mx, '') # delete empty elements
.gsub(%r{^</?items>\n}, '')
end
def push_to_hub(hub, url)
hub_uri =
begin
URI.parse(hub)
rescue URI::Error
nil
end
if !hub_uri.is_a?(URI::HTTP)
error "Invalid push endpoint: #{hub}"
return
end
log "Pushing #{url} to #{hub_uri}"
return if dry_run?
begin
faraday.post hub_uri, {
'hub.mode' => 'publish',
'hub.url' => url
}
rescue StandardError => e
error "Push failed: #{e.message}"
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/twitter_user_agent.rb | app/models/agents/twitter_user_agent.rb | module Agents
class TwitterUserAgent < Agent
include TwitterConcern
can_dry_run!
cannot_receive_events!
description <<~MD
The Twitter User Agent either follows the timeline of a specific Twitter user or follows your own home timeline including both your tweets and tweets from people whom you are following.
#{twitter_dependencies_missing if dependencies_missing?}
To be able to use this Agent you need to authenticate with Twitter in the [Services](/services) section first.
To follow a Twitter user set `choose_home_time_line` to `false` and provide the `username`.
To follow your own home timeline set `choose_home_time_line` to `true`.
Set `include_retweets` to `false` to not include retweets (default: `true`)
Set `exclude_replies` to `true` to exclude replies (default: `false`)
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
Set `starting_at` to the date/time (eg. `Mon Jun 02 00:38:12 +0000 2014`) you want to start receiving tweets from (default: agent's `created_at`)
MD
event_description <<~MD
Events are the raw JSON provided by the [Twitter API v1.1](https://dev.twitter.com/docs/api/1.1/get/statuses/user_timeline) with slight modifications. They should look something like this:
#{tweet_event_description('full_text')}
MD
default_schedule "every_1h"
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def default_options
{
'username' => 'tectonic',
'include_retweets' => 'true',
'exclude_replies' => 'false',
'expected_update_period_in_days' => '2',
'choose_home_time_line' => 'false'
}
end
def validate_options
if options[:expected_update_period_in_days].blank?
errors.add(:base, "expected_update_period_in_days is required")
end
if !boolify(options[:choose_home_time_line]) && options[:username].blank?
errors.add(:base, "username is required")
end
if options[:include_retweets].present? && !%w[true false].include?(options[:include_retweets].to_s)
errors.add(:base, "include_retweets must be a boolean value (true/false)")
end
if options[:starting_at].present?
begin
Time.parse(options[:starting_at])
rescue StandardError
errors.add(:base, "Error parsing starting_at")
end
end
end
def check
opts = {
count: 200,
include_rts: include_retweets?,
exclude_replies: exclude_replies?,
include_entities: true,
contributor_details: true,
tweet_mode: 'extended',
since_id: memory[:since_id].presence,
}.compact
tweets =
if choose_home_time_line?
twitter.home_timeline(opts)
else
twitter.user_timeline(interpolated[:username], opts)
end
tweets.sort_by(&:id).each do |tweet|
next unless tweet.created_at >= starting_at
memory[:since_id] = [tweet.id, *memory[:since_id]].max
create_event(payload: format_tweet(tweet))
end
end
private
def starting_at
if interpolated[:starting_at].present?
begin
Time.parse(interpolated[:starting_at])
rescue StandardError
end
end || created_at || Time.now # for dry-running
end
def choose_home_time_line?
boolify(interpolated[:choose_home_time_line])
end
def include_retweets?
# default to true
boolify(interpolated[:include_retweets]) != false
end
def exclude_replies?
# default to false
boolify(interpolated[:exclude_replies]) || false
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/json_parse_agent.rb | app/models/agents/json_parse_agent.rb | module Agents
class JsonParseAgent < Agent
include FormConfigurable
cannot_be_scheduled!
can_dry_run!
description <<~MD
The JSON Parse Agent parses a JSON string and emits the data in a new event or merge with with the original event.
`data` is the JSON to parse. Use [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) templating to specify the JSON string.
`data_key` sets the key which contains the parsed JSON data in emitted events
`mode` determines whether create a new `clean` event or `merge` old payload with new values (default: `clean`)
MD
def default_options
{
'data' => '{{ data }}',
'data_key' => 'data',
'mode' => 'clean',
}
end
event_description do
"Events will looks like this:\n\n %s" % Utils.pretty_print(interpolated['data_key'] => { parsed: 'object' })
end
form_configurable :data
form_configurable :data_key
form_configurable :mode, type: :array, values: ['clean', 'merge']
def validate_options
errors.add(:base, "data needs to be present") if options['data'].blank?
errors.add(:base, "data_key needs to be present") if options['data_key'].blank?
if options['mode'].present? && !options['mode'].to_s.include?('{{') && !%(clean merge).include?(options['mode'].to_s)
errors.add(:base, "mode must be 'clean' or 'merge'")
end
end
def working?
received_event_without_error?
end
def receive(incoming_events)
incoming_events.each do |event|
mo = interpolated(event)
existing_payload = mo['mode'].to_s == 'merge' ? event.payload : {}
create_event payload: existing_payload.merge({ mo['data_key'] => JSON.parse(mo['data']) })
rescue JSON::JSONError => e
error("Could not parse JSON: #{e.class} '#{e.message}'")
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/weather_agent.rb | app/models/agents/weather_agent.rb | require 'date'
require 'cgi'
module Agents
class WeatherAgent < Agent
cannot_receive_events!
gem_dependency_check { defined?(ForecastIO) }
description <<~MD
The Weather Agent creates an event for the day's weather at a given `location`.
#{'## Include `forecast_io` in your Gemfile to use this Agent!' if dependencies_missing?}
You also must select when you would like to get the weather forecast for using the `which_day` option, where the number 1 represents today, 2 represents tomorrow and so on. Weather forecast inforation is only returned for at most one week at a time.
The weather forecast information is provided by Pirate Weather, a drop-in replacement for the Dark Sky API (which no longer has a free tier).
The `location` must be a comma-separated string of map co-ordinates (longitude, latitude). For example, San Francisco would be `37.7771,-122.4196`.
You must set up an [API key for Pirate Weather](https://pirate-weather.apiable.io/) in order to use this Agent.
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
MD
event_description <<~MD
Events look like this:
{
"location": "12345",
"date": {
"epoch": "1357959600",
"pretty": "10:00 PM EST on January 11, 2013"
},
"high": {
"fahrenheit": "64",
"celsius": "18"
},
"low": {
"fahrenheit": "52",
"celsius": "11"
},
"conditions": "Rain Showers",
"icon": "rain",
"icon_url": "https://icons-ak.wxug.com/i/c/k/rain.gif",
"skyicon": "mostlycloudy",
...
}
MD
default_schedule "8pm"
def working?
event_created_within?((interpolated['expected_update_period_in_days'].presence || 2).to_i) && !recent_error_logs? && key_setup?
end
def key_setup?
interpolated['api_key'].present? && interpolated['api_key'] != "your-key" && interpolated['api_key'] != "put-your-key-here"
end
def default_options
{
'api_key' => 'your-key',
'location' => '37.779329,-122.41915',
'which_day' => '1',
'expected_update_period_in_days' => '2',
'language' => 'en'
}
end
def check
if key_setup?
create_event payload: model(which_day).merge('location' => location)
end
end
private
def which_day
(interpolated["which_day"].presence || 1).to_i
end
def location
interpolated["location"].presence || interpolated["zipcode"]
end
def coordinates
location.split(',').map { |e| e.to_f }
end
def language
interpolated["language"].presence || "en"
end
def wunderground?
interpolated["service"].presence && interpolated["service"].presence.downcase == "wunderground"
end
def darksky?
interpolated["service"].presence && interpolated["service"].presence.downcase == "darksky"
end
VALID_COORDS_REGEX = /^\s*-?\d{1,3}\.\d+\s*,\s*-?\d{1,3}\.\d+\s*$/
def validate_location
errors.add(:base, "location is required") unless location.present?
if location =~ VALID_COORDS_REGEX
lat, lon = coordinates
errors.add :base, "too low of a latitude" unless lat > -90
errors.add :base, "too big of a latitude" unless lat < 90
errors.add :base, "too low of a longitude" unless lon > -180
errors.add :base, "too high of a longitude" unless lon < 180
else
errors.add(
:base,
"Location #{location} is malformed. Location for " +
'Pirate Weather must be in the format "-00.000,-00.00000". The ' +
"number of decimal places does not matter."
)
end
end
def validate_options
errors.add(:base,
"The Weather Underground API has been disabled since Jan 1st 2018, please switch to Pirate Weather") if wunderground?
errors.add(:base, "The Dark Sky API has been disabled since March 31, 2023, please switch to Pirate Weather") if darksky?
validate_location
errors.add(:base, "api_key is required") unless interpolated['api_key'].present?
errors.add(:base, "which_day selection is required") unless which_day.present?
end
def pirate_weather
if key_setup?
ForecastIO.api_key = interpolated['api_key']
lat, lng = coordinates
ForecastIO.forecast(lat, lng, params: { lang: language.downcase })['daily']['data']
end
end
def model(which_day)
value = pirate_weather[which_day - 1]
if value
timestamp = Time.at(value.time)
{
'date' => {
'epoch' => value.time.to_s,
'pretty' => timestamp.strftime("%l:%M %p %Z on %B %d, %Y"),
'day' => timestamp.day,
'month' => timestamp.month,
'year' => timestamp.year,
'yday' => timestamp.yday,
'hour' => timestamp.hour,
'min' => timestamp.strftime("%M"),
'sec' => timestamp.sec,
'isdst' => timestamp.isdst ? 1 : 0,
'monthname' => timestamp.strftime("%B"),
'monthname_short' => timestamp.strftime("%b"),
'weekday_short' => timestamp.strftime("%a"),
'weekday' => timestamp.strftime("%A"),
'ampm' => timestamp.strftime("%p"),
'tz_short' => timestamp.zone
},
'period' => which_day.to_i,
'high' => {
'fahrenheit' => value.temperatureMax.round.to_s,
'epoch' => value.temperatureMaxTime.to_s,
'fahrenheit_apparent' => value.apparentTemperatureMax.round.to_s,
'epoch_apparent' => value.apparentTemperatureMaxTime.to_s,
'celsius' => ((5 * (Float(value.temperatureMax) - 32)) / 9).round.to_s
},
'low' => {
'fahrenheit' => value.temperatureMin.round.to_s,
'epoch' => value.temperatureMinTime.to_s,
'fahrenheit_apparent' => value.apparentTemperatureMin.round.to_s,
'epoch_apparent' => value.apparentTemperatureMinTime.to_s,
'celsius' => ((5 * (Float(value.temperatureMin) - 32)) / 9).round.to_s
},
'conditions' => value.summary,
'icon' => value.icon,
'avehumidity' => (value.humidity * 100).to_i,
'sunriseTime' => value.sunriseTime.to_s,
'sunsetTime' => value.sunsetTime.to_s,
'moonPhase' => value.moonPhase.to_s,
'precip' => {
'intensity' => value.precipIntensity.to_s,
'intensity_max' => value.precipIntensityMax.to_s,
'intensity_max_epoch' => value.precipIntensityMaxTime.to_s,
'probability' => value.precipProbability.to_s,
'type' => value.precipType
},
'dewPoint' => value.dewPoint.to_s,
'avewind' => {
'mph' => value.windSpeed.round.to_s,
'kph' => (Float(value.windSpeed) * 1.609344).round.to_s,
'degrees' => value.windBearing.to_s
},
'visibility' => value.visibility.to_s,
'cloudCover' => value.cloudCover.to_s,
'pressure' => value.pressure.to_s,
'ozone' => value.ozone.to_s
}
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/gap_detector_agent.rb | app/models/agents/gap_detector_agent.rb | module Agents
class GapDetectorAgent < Agent
default_schedule "every_10m"
description <<~MD
The Gap Detector Agent will watch for holes or gaps in a stream of incoming Events and generate "no data alerts".
The `value_path` value is a [JSONPath](http://goessner.net/articles/JsonPath/) to a value of interest. If either
this value is empty, or no Events are received, during `window_duration_in_days`, an Event will be created with
a payload of `message`.
MD
event_description <<~MD
Events look like:
{
"message": "No data has been received!",
"gap_started_at": "1234567890"
}
MD
def validate_options
unless options['message'].present?
errors.add(:base, "message is required")
end
unless options['window_duration_in_days'].present? && options['window_duration_in_days'].to_f > 0
errors.add(:base, "window_duration_in_days must be provided as an integer or floating point number")
end
end
def default_options
{
'window_duration_in_days' => "2",
'message' => "No data has been received!"
}
end
def working?
true
end
def receive(incoming_events)
incoming_events.sort_by(&:created_at).each do |event|
memory['newest_event_created_at'] ||= 0
if !interpolated['value_path'].present? || Utils.value_at(event.payload, interpolated['value_path']).present?
if event.created_at.to_i > memory['newest_event_created_at']
memory['newest_event_created_at'] = event.created_at.to_i
memory.delete('alerted_at')
end
end
end
end
def check
window = interpolated['window_duration_in_days'].to_f.days.ago
if memory['newest_event_created_at'].present? && Time.at(memory['newest_event_created_at']) < window
unless memory['alerted_at']
memory['alerted_at'] = Time.now.to_i
create_event payload: {
message: interpolated['message'],
gap_started_at: memory['newest_event_created_at']
}
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/shell_command_agent.rb | app/models/agents/shell_command_agent.rb | module Agents
class ShellCommandAgent < Agent
default_schedule "never"
can_dry_run!
no_bulk_receive!
def self.should_run?
ENV['ENABLE_INSECURE_AGENTS'] == "true"
end
description <<~MD
The Shell Command Agent will execute commands on your local system, returning the output.
`command` specifies the command (either a shell command line string or an array of command line arguments) to be executed, and `path` will tell ShellCommandAgent in what directory to run this command. The content of `stdin` will be fed to the command via the standard input.
`expected_update_period_in_days` is used to determine if the Agent is working.
ShellCommandAgent can also act upon received events. When receiving an event, this Agent's options can interpolate values from the incoming event.
For example, your command could be defined as `{{cmd}}`, in which case the event's `cmd` property would be used.
The resulting event will contain the `command` which was executed, the `path` it was executed under, the `exit_status` of the command, the `errors`, and the actual `output`. ShellCommandAgent will not log an error if the result implies that something went wrong.
If `unbundle` is set to true, the command is run in a clean environment, outside of Huginn's bundler context.
If `suppress_on_failure` is set to true, no event is emitted when `exit_status` is not zero.
If `suppress_on_empty_output` is set to true, no event is emitted when `output` is empty.
*Warning*: This type of Agent runs arbitrary commands on your system, #{Agents::ShellCommandAgent.should_run? ? "but is **currently enabled**" : "and is **currently disabled**"}.
Only enable this Agent if you trust everyone using your Huginn installation.
You can enable this Agent in your .env file by setting `ENABLE_INSECURE_AGENTS` to `true`.
MD
event_description <<~MD
Events look like this:
{
"command": "pwd",
"path": "/home/Huginn",
"exit_status": 0,
"errors": "",
"output": "/home/Huginn"
}
MD
def default_options
{
'path' => "/",
'command' => "pwd",
'unbundle' => false,
'suppress_on_failure' => false,
'suppress_on_empty_output' => false,
'expected_update_period_in_days' => 1
}
end
def validate_options
unless options['path'].present? && options['command'].present? && options['expected_update_period_in_days'].present?
errors.add(:base, "The path, command, and expected_update_period_in_days fields are all required.")
end
case options['stdin']
when String, nil
else
errors.add(:base, "stdin must be a string.")
end
unless Array(options['command']).all? { |o| o.is_a?(String) }
errors.add(:base, "command must be a shell command line string or an array of command line arguments.")
end
unless File.directory?(interpolated['path'])
errors.add(:base, "#{options['path']} is not a real directory.")
end
end
def working?
Agents::ShellCommandAgent.should_run? && event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def receive(incoming_events)
incoming_events.each do |event|
handle(interpolated(event), event)
end
end
def check
handle(interpolated)
end
private
def handle(opts, event = nil)
if Agents::ShellCommandAgent.should_run?
command = opts['command']
path = opts['path']
stdin = opts['stdin']
result, errors, exit_status = run_command(path, command, stdin, **interpolated.slice(:unbundle).symbolize_keys)
payload = {
'command' => command,
'path' => path,
'exit_status' => exit_status,
'errors' => errors,
'output' => result,
}
unless suppress_event?(payload)
created_event = create_event(payload:)
end
log("Ran '#{command}' under '#{path}'", outbound_event: created_event, inbound_event: event)
else
log("Unable to run because insecure agents are not enabled. Edit ENABLE_INSECURE_AGENTS in the Huginn .env configuration.")
end
end
def run_command(path, command, stdin, unbundle: false)
if unbundle
return Bundler.with_original_env {
run_command(path, command, stdin)
}
end
begin
rout, wout = IO.pipe
rerr, werr = IO.pipe
rin, win = IO.pipe
pid = spawn(*command, chdir: path, out: wout, err: werr, in: rin)
wout.close
werr.close
rin.close
if stdin
win.write stdin
win.close
end
(result = rout.read).strip!
(errors = rerr.read).strip!
_, status = Process.wait2(pid)
exit_status = status.exitstatus
rescue StandardError => e
errors = e.to_s
result = ''.freeze
exit_status = nil
end
[result, errors, exit_status]
end
def suppress_event?(payload)
(boolify(interpolated['suppress_on_failure']) && payload['exit_status'].nonzero?) ||
(boolify(interpolated['suppress_on_empty_output']) && payload['output'].empty?)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/mqtt_agent.rb | app/models/agents/mqtt_agent.rb | require "json"
module Agents
class MqttAgent < Agent
gem_dependency_check { defined?(MQTT) }
description <<~MD
The MQTT Agent allows both publication and subscription to an MQTT topic.
#{'## Include `mqtt` in your Gemfile to use this Agent!' if dependencies_missing?}
MQTT is a generic transport protocol for machine to machine communication.
You can do things like:
* Publish to [RabbitMQ](http://www.rabbitmq.com/mqtt.html)
* Run [OwnTracks, a location tracking tool](http://owntracks.org/) for iOS and Android
* Subscribe to your home automation setup like [Ninjablocks](http://forums.ninjablocks.com/index.php?p=/discussion/661/today-i-learned-about-mqtt/p1) or [TheThingSystem](http://thethingsystem.com/dev/supported-things.html)
Simply choose a topic (think email subject line) to publish/listen to, and configure your service.
It's easy to setup your own [broker](http://jpmens.net/2013/09/01/installing-mosquitto-on-a-raspberry-pi/) or connect to a [cloud service](http://www.cloudmqtt.com)
Hints:
Many services run mqtts (mqtt over SSL) often with a custom certificate.
You'll want to download their cert and install it locally, specifying the ```certificate_path``` configuration.
Example configuration:
<pre><code>{
'uri' => 'mqtts://user:pass@localhost:8883'
'ssl' => :TLSv1,
'ca_file' => './ca.pem',
'cert_file' => './client.crt',
'key_file' => './client.key',
'topic' => 'huginn'
}
</code></pre>
Subscribe to CloCkWeRX's TheThingSystem instance (thethingsystem.com), where
temperature and other events are being published.
<pre><code>{
'uri' => 'mqtt://kcqlmkgx:sVNoccqwvXxE@m10.cloudmqtt.com:13858',
'topic' => 'the_thing_system/demo'
}
</code></pre>
Subscribe to all topics
<pre><code>{
'uri' => 'mqtt://kcqlmkgx:sVNoccqwvXxE@m10.cloudmqtt.com:13858',
'topic' => '/#'
}
</code></pre>
Find out more detail on [subscription wildcards](http://www.eclipse.org/paho/files/mqttdoc/Cclient/wildcard.html)
MD
event_description <<~MD
Events are simply nested MQTT payloads. For example, an MQTT payload for Owntracks
{
"topic": "owntracks/kcqlmkgx/Dan",
"message": {"_type": "location", "lat": "-34.8493644", "lon": "138.5218119", "tst": "1401771049", "acc": "50.0", "batt": "31", "desc": "Home", "event": "enter"},
"time": 1401771051
}
MD
def validate_options
unless options['uri'].present? &&
options['topic'].present?
errors.add(:base, "topic and uri are required")
end
end
def working?
(event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?) || received_event_without_error?
end
def default_options
{
'uri' => 'mqtts://user:pass@localhost:8883',
'ssl' => :TLSv1,
'ca_file' => './ca.pem',
'cert_file' => './client.crt',
'key_file' => './client.key',
'topic' => 'huginn',
'max_read_time' => '10',
'expected_update_period_in_days' => '2'
}
end
def mqtt_client
@client ||= MQTT::Client.new(interpolated['uri']).tap { |c|
if interpolated['ssl']
c.ssl = interpolated['ssl'].to_sym
c.ca_file = interpolated['ca_file']
c.cert_file = interpolated['cert_file']
c.key_file = interpolated['key_file']
end
}
end
def receive(incoming_events)
mqtt_client.connect do |c|
incoming_events.each do |event|
c.publish(interpolated(event)['topic'], event.payload['message'])
end
end
end
def check
last_message = memory['last_message']
mqtt_client.connect
poll_thread = Thread.new do
mqtt_client.get_packet(interpolated['topic']) do |packet|
topic, payload = message = [packet.topic, packet.payload]
# Ignore a message if it is previously received
next if (packet.retain || packet.duplicate) && message == last_message
last_message = message
# A lot of services generate JSON, so try that.
begin
payload = JSON.parse(payload)
rescue StandardError
end
create_event payload: {
'topic' => topic,
'message' => payload,
'time' => Time.now.to_i
}
end
end
sleep (interpolated['max_read_time'].presence || 15).to_f
mqtt_client.disconnect
poll_thread.kill
# Remember the last original (non-retain, non-duplicate) message
self.memory['last_message'] = last_message
save!
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/rss_agent.rb | app/models/agents/rss_agent.rb | module Agents
class RssAgent < Agent
include WebRequestConcern
cannot_receive_events!
can_dry_run!
default_schedule "every_1d"
gem_dependency_check { defined?(Feedjira) }
DEFAULT_EVENTS_ORDER = [['{{date_published}}', 'time'], ['{{last_updated}}', 'time']]
description do
<<~MD
The RSS Agent consumes RSS feeds and emits events when they change.
This agent, using [Feedjira](https://github.com/feedjira/feedjira) as a base, can parse various types of RSS and Atom feeds and has some special handlers for FeedBurner, iTunes RSS, and so on. However, supported fields are limited by its general and abstract nature. For complex feeds with additional field types, we recommend using a WebsiteAgent. See [this example](https://github.com/huginn/huginn/wiki/Agent-configuration-examples#itunes-trailers).
If you want to *output* an RSS feed, use the DataOutputAgent.
Options:
* `url` - The URL of the RSS feed (an array of URLs can also be used; items with identical guids across feeds will be considered duplicates).
* `include_feed_info` - Set to `true` to include feed information in each event.
* `clean` - Set to `true` to sanitize `description` and `content` as HTML fragments, removing unknown/unsafe elements and attributes.
* `expected_update_period_in_days` - How often you expect this RSS feed to change. If more than this amount of time passes without an update, the Agent will mark itself as not working.
* `headers` - When present, it should be a hash of headers to send with the request.
* `basic_auth` - Specify HTTP basic auth parameters: `"username:password"`, or `["username", "password"]`.
* `disable_ssl_verification` - Set to `true` to disable ssl verification.
* `disable_url_encoding` - Set to `true` to disable url encoding.
* `force_encoding` - Set `force_encoding` to an encoding name if the website is known to respond with a missing, invalid or wrong charset in the Content-Type header. Note that a text content without a charset is taken as encoded in UTF-8 (not ISO-8859-1).
* `user_agent` - A custom User-Agent name (default: "Faraday v#{Faraday::VERSION}").
* `max_events_per_run` - Limit number of events created (items parsed) per run for feed.
* `remembered_id_count` - Number of IDs to keep track of and avoid re-emitting (default: 500).
# Ordering Events
#{description_events_order}
In this Agent, the default value for `events_order` is `#{DEFAULT_EVENTS_ORDER.to_json}`.
MD
end
def default_options
{
'expected_update_period_in_days' => "5",
'clean' => 'false',
'url' => "https://github.com/huginn/huginn/commits/master.atom"
}
end
event_description <<~MD
Events look like:
{
"feed": {
"id": "...",
"type": "atom",
"generator": "...",
"url": "http://example.com/",
"links": [
{ "href": "http://example.com/", "rel": "alternate", "type": "text/html" },
{ "href": "http://example.com/index.atom", "rel": "self", "type": "application/atom+xml" }
],
"title": "Some site title",
"description": "Some site description",
"copyright": "...",
"icon": "http://example.com/icon.png",
"authors": [ "..." ],
"itunes_block": "no",
"itunes_categories": [
"Technology", "Gadgets",
"TV & Film",
"Arts", "Food"
],
"itunes_complete": "yes",
"itunes_explicit": "yes",
"itunes_image": "http://...",
"itunes_new_feed_url": "http://...",
"itunes_owners": [ "John Doe <john.doe@example.com>" ],
"itunes_subtitle": "...",
"itunes_summary": "...",
"language": "en-US",
"date_published": "2014-09-11T01:30:00-07:00",
"last_updated": "2014-09-11T01:30:00-07:00"
},
"id": "829f845279611d7925146725317b868d",
"url": "http://example.com/...",
"urls": [ "http://example.com/..." ],
"links": [
{ "href": "http://example.com/...", "rel": "alternate" },
],
"title": "Some title",
"description": "Some description",
"content": "Some content",
"authors": [ "Some Author <email@address>" ],
"categories": [ "..." ],
"image": "http://example.com/...",
"enclosure": {
"url" => "http://example.com/file.mp3", "type" => "audio/mpeg", "length" => "123456789"
},
"itunes_block": "no",
"itunes_closed_captioned": "yes",
"itunes_duration": "04:34",
"itunes_explicit": "yes",
"itunes_image": "http://...",
"itunes_order": "1",
"itunes_subtitle": "...",
"itunes_summary": "...",
"date_published": "2014-09-11T01:30:00-0700",
"last_updated": "2014-09-11T01:30:00-0700"
}
Some notes:
- The `feed` key is present only if `include_feed_info` is set to true.
- The keys starting with `itunes_`, and `language` are only present when the feed is a podcast. See [Podcasts Connect Help](https://help.apple.com/itc/podcasts_connect/#/itcb54353390) for details.
- Each element in `authors` and `itunes_owners` is a string normalized in the format "*name* <*email*> (*url*)", where each space-separated part is optional.
- Timestamps are converted to the ISO 8601 format.
MD
def working?
event_created_within?((interpolated['expected_update_period_in_days'].presence || 10).to_i) && !recent_error_logs?
end
def validate_options
errors.add(:base, "url is required") unless options['url'].present?
unless options['expected_update_period_in_days'].present? && options['expected_update_period_in_days'].to_i > 0
errors.add(:base,
"Please provide 'expected_update_period_in_days' to indicate how many days can pass without an update before this Agent is considered to not be working")
end
if options['remembered_id_count'].present? && options['remembered_id_count'].to_i < 1
errors.add(:base,
"Please provide 'remembered_id_count' as a number bigger than 0 indicating how many IDs should be saved to distinguish between new and old IDs in RSS feeds. Delete option to use default (500).")
end
validate_web_request_options!
validate_events_order
end
def events_order(key = SortableEvents::EVENTS_ORDER_KEY)
if key == SortableEvents::EVENTS_ORDER_KEY
super.presence || DEFAULT_EVENTS_ORDER
else
raise ArgumentError, "unsupported key: #{key}"
end
end
def check
check_urls(Array(interpolated['url']))
end
protected
def check_urls(urls)
new_events = []
max_events = (interpolated['max_events_per_run'].presence || 0).to_i
urls.each do |url|
response = faraday.get(url)
if response.success?
feed = Feedjira.parse(preprocessed_body(response))
new_events.concat feed_to_events(feed)
else
error "Failed to fetch #{url}: #{response.inspect}"
end
rescue StandardError => e
error "Failed to fetch #{url} with message '#{e.message}': #{e.backtrace}"
end
events = sort_events(new_events).select.with_index { |event, index|
check_and_track(event.payload[:id]) &&
!(max_events && max_events > 0 && index >= max_events)
}
create_events(events)
log "Fetched #{urls.to_sentence} and created #{events.size} event(s)."
end
def remembered_id_count
(options['remembered_id_count'].presence || 500).to_i
end
def check_and_track(entry_id)
memory['seen_ids'] ||= []
if memory['seen_ids'].include?(entry_id)
false
else
memory['seen_ids'].unshift entry_id
memory['seen_ids'].pop(memory['seen_ids'].length - remembered_id_count) if memory['seen_ids'].length > remembered_id_count
true
end
end
unless dependencies_missing?
require 'feedjira_extension'
end
def preprocessed_body(response)
body = response.body
case body.encoding
when Encoding::ASCII_8BIT
# Encoding is unknown from the Content-Type, so let the SAX
# parser detect it from the content.
else
# Encoding is already known, so do not let the parser detect
# it from the XML declaration in the content.
body.sub!(/(?<noenc>\A\u{FEFF}?\s*<\?xml(?:\s+\w+(?<av>\s*=\s*(?:'[^']*'|"[^"]*")))*?)\s+encoding\g<av>/,
'\\k<noenc>')
end
body
end
def feed_data(feed)
type =
case feed.class.name
when /Atom/
'atom'
else
'rss'
end
{
id: feed.feed_id,
type:,
url: feed.url,
links: feed.links,
title: feed.title,
description: feed.description,
copyright: feed.copyright,
generator: feed.generator,
icon: feed.icon,
authors: feed.authors,
date_published: feed.date_published,
last_updated: feed.last_updated,
**itunes_feed_data(feed)
}
end
def itunes_feed_data(feed)
data = {}
case feed
when Feedjira::Parser::ITunesRSS
%i[
itunes_block
itunes_categories
itunes_complete
itunes_explicit
itunes_image
itunes_new_feed_url
itunes_owners
itunes_subtitle
itunes_summary
language
].each { |attr|
next unless value = feed.try(attr).presence
data[attr] =
case attr
when :itunes_summary
clean_fragment(value)
else
value
end
}
end
data
end
def entry_data(entry)
{
id: entry.id,
url: entry.url,
urls: Array(entry.url) | entry.links.map(&:href),
links: entry.links,
title: entry.title,
description: clean_fragment(entry.summary),
content: clean_fragment(entry.content || entry.summary),
image: entry.try(:image),
enclosure: entry.enclosure,
authors: entry.authors,
categories: Array(entry.try(:categories)),
date_published: entry.date_published,
last_updated: entry.last_updated,
**itunes_entry_data(entry)
}
end
def itunes_entry_data(entry)
data = {}
case entry
when Feedjira::Parser::ITunesRSSItem
%i[
itunes_block
itunes_closed_captioned
itunes_duration
itunes_explicit
itunes_image
itunes_order
itunes_subtitle
itunes_summary
].each { |attr|
if value = entry.try(attr).presence
data[attr] = value
end
}
end
data
end
def feed_to_events(feed)
payload_base = {}
if boolify(interpolated['include_feed_info'])
payload_base[:feed] = feed_data(feed)
end
feed.entries.map { |entry|
Event.new(payload: payload_base.merge(entry_data(entry)))
}
end
def clean_fragment(fragment)
if boolify(interpolated['clean']) && fragment.present?
Loofah.scrub_fragment(fragment, :prune).to_s
else
fragment
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/weibo_publish_agent.rb | app/models/agents/weibo_publish_agent.rb | module Agents
class WeiboPublishAgent < Agent
include WeiboConcern
cannot_be_scheduled!
description <<~MD
The Weibo Publish Agent publishes tweets from the events it receives.
#{'## Include `weibo_2` in your Gemfile to use this Agent!' if dependencies_missing?}
You must first set up a Weibo app and generate an `access_token` for the user that will be used for posting status updates.
You'll use that `access_token`, along with the `app_key` and `app_secret` for your Weibo app. You must also include the Weibo User ID (as `uid`) of the person to publish as.
You must also specify a `message_path` parameter: a [JSONPaths](http://goessner.net/articles/JsonPath/) to the value to tweet.
You can also specify a `pic_path` parameter: a [JSONPaths](http://goessner.net/articles/JsonPath/) to the picture url to tweet along.
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
MD
def validate_options
unless options['uid'].present? &&
options['expected_update_period_in_days'].present?
errors.add(:base, "expected_update_period_in_days and uid are required")
end
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && most_recent_event && most_recent_event.payload['success'] == true && !recent_error_logs?
end
def default_options
{
'uid' => "",
'access_token' => "---",
'app_key' => "---",
'app_secret' => "---",
'expected_update_period_in_days' => "10",
'message_path' => "text",
'pic_path' => "pic"
}
end
def receive(incoming_events)
# if there are too many, dump a bunch to avoid getting rate limited
if incoming_events.count > 20
incoming_events = incoming_events.first(20)
end
incoming_events.each do |event|
tweet_text = Utils.value_at(event.payload, interpolated(event)['message_path'])
pic_url = Utils.value_at(event.payload, interpolated(event)['pic_path'])
if event.agent.type == "Agents::TwitterUserAgent"
tweet_text = unwrap_tco_urls(tweet_text, event.payload)
end
begin
if valid_image?(pic_url)
publish_tweet_with_pic tweet_text, pic_url
else
publish_tweet tweet_text
end
create_event payload: {
'success' => true,
'published_tweet' => tweet_text,
'published_pic' => pic_url,
'agent_id' => event.agent_id,
'event_id' => event.id
}
rescue OAuth2::Error => e
create_event payload: {
'success' => false,
'error' => e.message,
'failed_tweet' => tweet_text,
'failed_pic' => pic_url,
'agent_id' => event.agent_id,
'event_id' => event.id
}
end
# you can't tweet too fast, give it a minute, i mean... 10 seconds
sleep 10 if incoming_events.length > 1
end
end
def publish_tweet(text)
weibo_client.statuses.update text
end
def publish_tweet_with_pic(text, pic)
weibo_client.statuses.upload text, open(pic)
end
def valid_image?(url)
url = URI.parse(url)
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = (url.scheme == "https")
http.start do |http|
# images supported #http://open.weibo.com/wiki/2/statuses/upload
return ['image/gif', 'image/jpeg', 'image/png'].include? http.head(url.request_uri)['Content-Type']
end
rescue StandardError => e
false
end
def unwrap_tco_urls(text, tweet_json)
tweet_json[:entities][:urls].each do |url|
text.gsub! url[:url], url[:expanded_url]
end
text
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/scheduler_agent.rb | app/models/agents/scheduler_agent.rb | require 'fugit'
module Agents
class SchedulerAgent < Agent
include AgentControllerConcern
cannot_be_scheduled!
cannot_receive_events!
cannot_create_events!
@@second_precision_enabled = ENV['ENABLE_SECOND_PRECISION_SCHEDULE'] == 'true'
cattr_reader :second_precision_enabled
description <<~MD
The Scheduler Agent periodically takes an action on target Agents according to a user-defined schedule.
# Action types
Set `action` to one of the action types below:
* `run`: Target Agents are run at intervals, except for those disabled.
* `disable`: Target Agents are disabled (if not) at intervals.
* `enable`: Target Agents are enabled (if not) at intervals.
* If the option `drop_pending_events` is set to `true`, pending events will be cleared before the agent is enabled.
# Targets
Select Agents that you want to run periodically by this SchedulerAgent.
# Schedule
Set `schedule` to a schedule specification in the [cron](http://en.wikipedia.org/wiki/Cron) format.
For example:
* `0 22 * * 1-5`: every day of the week at 22:00 (10pm)
* `*/10 8-11 * * *`: every 10 minutes from 8:00 to and not including 12:00
This variant has several extensions as explained below.
## Timezones
You can optionally specify a timezone (default: `#{Time.zone.name}`) after the day-of-week field using the labels in the [tz database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
* `0 22 * * 1-5 Europe/Paris`: every day of the week when it's 22:00 in Paris
* `0 22 * * 1-5 Etc/GMT+2`: every day of the week when it's 22:00 in GMT+2
## Seconds
You can optionally specify seconds before the minute field.
* `*/30 * * * * *`: every 30 seconds
#{"Only multiples of fifteen are allowed as values for the seconds field, i.e. `*/15`, `*/30`, `15,45` etc." unless second_precision_enabled}
## Last day of month
`L` signifies "last day of month" in `day-of-month`.
* `0 22 L * *`: every month on the last day at 22:00
## Weekday names
You can use three letter names instead of numbers in the `weekdays` field.
* `0 22 * * Sat,Sun`: every Saturday and Sunday, at 22:00
## Nth weekday of the month
You can specify "nth weekday of the month" like this.
* `0 22 * * Sun#1,Sun#2`: every first and second Sunday of the month, at 22:00
* `0 22 * * Sun#L1`: every last Sunday of the month, at 22:00
MD
def default_options
super.update({
'schedule' => '0 * * * *',
})
end
def working?
true
end
def validate_options
if (spec = options['schedule']).present?
begin
cron = Fugit::Cron.new(spec) or raise ArgumentError
unless second_precision_enabled || (cron.seconds - [0, 15, 30, 45, 60]).empty?
errors.add(:base, "second precision schedule is not allowed in this service")
end
rescue ArgumentError
errors.add(:base, "invalid schedule")
end
else
errors.add(:base, "schedule is missing")
end
end
before_save do
self.memory.delete('scheduled_at') if self.options_changed?
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/jq_agent.rb | app/models/agents/jq_agent.rb | require 'open3'
module Agents
class JqAgent < Agent
cannot_be_scheduled!
can_dry_run!
def self.should_run?
!!jq_version
end
def self.jq_command
ENV['USE_JQ'].presence
end
def self.jq_version
if command = jq_command
Open3.capture2(command, '--version', 2 => IO::NULL).first[/\Ajq-\K\S+/]
end
end
def self.jq_info
if version = jq_version
"jq version #{version} is installed"
else
"**This agent is not enabled on this server**"
end
end
gem_dependency_check { jq_version }
description <<~MD
The Jq Agent allows you to process incoming Events with [jq](https://stedolan.github.io/jq/) the JSON processor. (#{jq_info})
It allows you to filter, transform and restructure Events in the way you want using jq's powerful features.
You can specify a jq filter expression to apply to each incoming event in `filter`, and results it produces will become Events to be emitted.
You can optionally pass in variables to the filter program by specifying key-value pairs of a variable name and an associated value in the `variables` key, each of which becomes a predefined variable.
This Agent can be used to parse a complex JSON structure that is too hard to handle with JSONPath or Liquid templating.
For example, suppose that a Post Agent created an Event which contains a `body` key with a value of the JSON formatted string of the following response body:
{
"status": "1",
"since": "1245626956",
"list": {
"93817": {
"item_id": "93817",
"url": "http://url.com",
"title": "Page Title",
"time_updated": "1245626956",
"time_added": "1245626956",
"tags": "comma,seperated,list",
"state": "0"
},
"935812": {
"item_id": "935812",
"url": "http://google.com",
"title": "Google",
"time_updated": "1245635279",
"time_added": "1245635279",
"tags": "comma,seperated,list",
"state": "1"
}
}
}
Then you could have a Jq Agent with the following jq filter:
.body | fromjson | .list | to_entries | map(.value) | map(try(.tags |= split(",")) // .) | sort_by(.time_added | tonumber)
To get the following two Events emitted out of the said incoming Event from Post Agent:
[
{
"item_id": "93817",
"url": "http://url.com",
"title": "Page Title",
"time_updated": "1245626956",
"time_added": "1245626956",
"tags": ["comma", "seperated", "list"],
"state": "0"
},
{
"item_id": "935812",
"url": "http://google.com",
"title": "Google",
"time_updated": "1245626956",
"time_added": "1245626956",
"tags": ["comma", "seperated", "list"],
"state": "1"
}
]
MD
def validate_options
errors.add(:base, "filter needs to be present.") if !options['filter'].is_a?(String)
errors.add(:base,
"variables must be a hash if present.") if options.key?('variables') && !options['variables'].is_a?(Hash)
end
def default_options
{
'filter' => '.',
'variables' => {}
}
end
def working?
self.class.should_run? && !recent_error_logs?
end
def receive(incoming_events)
if !self.class.should_run?
log("Unable to run because this agent is not enabled. Edit the USE_JQ environment variable.")
return
end
incoming_events.each do |event|
interpolate_with(event) do
process_event(event)
end
end
end
private
def get_variables
variables = interpolated['variables']
return {} if !variables.is_a?(Hash)
variables.map { |name, value|
[name.to_s, value.to_json]
}.to_h
end
def process_event(event)
Tempfile.create do |file|
filter = interpolated['filter'].to_s
# There seems to be no way to force jq to treat an arbitrary
# string as a filter without being confused with a command
# line option, so pass one via file.
file.print filter
file.close
variables = get_variables
command_args = [
self.class.jq_command,
'--compact-output',
'--sort-keys',
'--from-file', file.path,
*variables.flat_map { |name, json|
['--argjson', name, json]
}
]
log [
"Running jq with filter: #{filter}",
*variables.map { |name, json| "variable: #{name} = #{json}" }
].join("\n")
Open3.popen3(*command_args) do |stdin, stdout, stderr, wait_thread|
stderr_reader = Thread.new { stderr.read }
stdout_reader = Thread.new { stdout.each_line.flat_map { |line| JSON.parse(line) } }
results, errout, status =
begin
JSON.dump(event.payload, stdin)
stdin.close
[
stdout_reader.value,
stderr_reader.value,
wait_thread.value
]
rescue Errno::EPIPE
end
if !status.success?
error "Error output from jq:\n#{errout}"
return
end
results.keep_if do |result|
if result.is_a?(Hash)
true
else
error "Ignoring a non-object result: #{result.to_json}"
false
end
end
log "Creating #{results.size} events"
results.each do |payload|
create_event(payload:)
end
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/google_translation_agent.rb | app/models/agents/google_translation_agent.rb | module Agents
class GoogleTranslationAgent < Agent
cannot_be_scheduled!
can_dry_run!
gem_dependency_check do
require 'google/cloud/translate/v2'
rescue LoadError
false
else
true
end
description <<~MD
The Translation Agent will attempt to translate text between natural languages.
#{'## Include `google-api-client` in your Gemfile to use this Agent!' if dependencies_missing?}
Services are provided using Google Translate. You can [sign up](https://cloud.google.com/translate/) to get `google_api_key` which is required to use this agent.
The service is **not free**.
To use credentials for the `google_api_key` use the liquid `credential` tag like so `{% credential google-api-key %}`
`to` must be filled with a [translator language code](https://cloud.google.com/translate/docs/languages).
`from` is the language translated from. If it's not specified, the API will attempt to detect the source language automatically and return it within the response.
Specify an object in `content` field using [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) expressions, which will be evaluated for each incoming event, and then translated to become the payload of the new event.
You can specify a nested object of any levels containing arrays and objects, and all string values except for object keys will be recursively translated.
Set `mode` to `merge` if you want to merge each translated content with the original event payload. The default behavior (`clean`) is to emit events with only translated contents.
`expected_receive_period_in_days` is the maximum number of days you would allow to pass between events.
MD
event_description "User defined"
def default_options
{
'mode' => 'clean',
'to' => 'sv',
'from' => 'en',
'google_api_key' => '',
'expected_receive_period_in_days' => 1,
'content' => {
'text' => "{{message}}",
'moretext' => "{{another_message}}"
}
}
end
def working?
last_receive_at && last_receive_at > interpolated['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def validate_options
unless options['google_api_key'].present? && options['to'].present? && options['content'].present? && options['expected_receive_period_in_days'].present?
errors.add :base, "google_api_key, to, content and expected_receive_period_in_days are all required"
end
case options['mode'].presence
when nil, /\A(?:clean|merge)\z|\{/
# ok
else
errors.add(:base, "mode must be 'clean' or 'merge'")
end
end
def receive(incoming_events)
incoming_events.each do |event|
interpolate_with(event) do
translated_content = translate(interpolated['content'])
case interpolated['mode']
when 'merge'
create_event payload: event.payload.merge(translated_content)
else
create_event payload: translated_content
end
end
end
end
def translate(content)
if !content.is_a?(Hash)
error("content must be an object, but it is #{content.class}.")
return
end
api = Google::Cloud::Translate::V2.new(
key: interpolated['google_api_key']
)
texts = []
walker = ->(value) {
case value
in nil | Numeric | true | false
in _ if _.blank?
in String
texts << value
in Array
value.each(&walker)
in Hash
value.each_value(&walker)
end
}
walker.call(content)
translations =
if texts.empty?
[]
else
api.translate(
*texts,
from: interpolated['from'].presence,
to: interpolated['to'],
format: 'text',
)
end
# Hash key order should be constant in Ruby
mapper = ->(value) {
case value
in nil | Numeric | true | false
value
in _ if _.blank?
value
in String
translations&.shift&.text
in Array
value.map(&mapper)
in Hash
value.transform_values(&mapper)
end
}
mapper.call(content)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/manual_event_agent.rb | app/models/agents/manual_event_agent.rb | module Agents
class ManualEventAgent < Agent
cannot_be_scheduled!
cannot_receive_events!
description <<~MD
The Manual Event Agent is used to manually create Events for testing or other purposes.
Connect this Agent to other Agents and create Events using the UI provided on this Agent's Summary page.
You can set the default event payload via the "payload" option.
MD
event_description do
"Events are editable in the UI. The default value is this:\n\n " +
Utils.pretty_print(options["payload"].presence || {})
end
def default_options
{ "payload" => {} }
end
def handle_details_post(params)
if params['payload']
json = interpolate_options(JSON.parse(params['payload']))
if json['payloads'] && (json.keys - ['payloads']).length > 0
{ success: false,
error: "If you provide the 'payloads' key, please do not provide any other keys at the top level." }
else
[json['payloads'] || json].flatten.each do |payload|
create_event(payload:)
end
{ success: true }
end
else
{ success: false, error: "You must provide a JSON payload" }
end
end
def working?
true
end
def validate_options
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/imap_folder_agent.rb | app/models/agents/imap_folder_agent.rb | require 'base64'
require 'delegate'
require 'net/imap'
require 'mail'
module Agents
class ImapFolderAgent < Agent
include GoogleOauth2Concern
include EventHeadersConcern
cannot_receive_events!
can_dry_run!
default_schedule "every_30m"
description <<~MD
The Imap Folder Agent checks an IMAP server in specified folders and creates Events based on new mails found since the last run. In the first visit to a folder, this agent only checks for the initial status and does not create events.
Specify an IMAP server to connect with `host`, and set `ssl` to true if the server supports IMAP over SSL. Specify `port` if you need to connect to a port other than standard (143 or 993 depending on the `ssl` value), and specify login credentials in `username` and `password`.
Alternatively, if you want to use Gmail, go to the Services page and authenticate with Google beforehand, and then select the service. In this case, `host`, `ssl`, `port`, `username` and `password` are unnecessary and will be ignored.
List the names of folders to check in `folders`.
Specify an array of MIME types in 'mime_types' to tell which non-attachment part of a mail among its `text/*` parts should be used as mail body. The default value is `['text/plain', 'text/enriched', 'text/html']`.
To narrow mails by conditions, build a `conditions` hash with the following keys:
- `subject`
- `body`
Specify a regular expression to match against the decoded subject/body of each mail.
Use the `(?i)` directive for case-insensitive search. For example, a pattern `(?i)alert` will match "alert", "Alert"or "ALERT". You can also make only a part of a pattern to work case-insensitively: `Re: (?i:alert)` will match either "Re: Alert" or "Re: alert", but not "RE: alert".
When a mail has multiple non-attachment text parts, they are prioritized according to the `mime_types` option (as mentioned above) and the first part that matches a "body" pattern, if specified, will be chosen as the "body" value in a created event.
Named captures will appear in the "matches" hash in a created event.
- `from`, `to`, `cc`
Specify a shell glob pattern string that is matched against mail addresses extracted from the corresponding header values of each mail.
Patterns match addresses in case insensitive manner.
Multiple pattern strings can be specified in an array, in which case a mail is selected if any of the patterns matches. (i.e. patterns are OR'd)
- `is_unread`
Setting this to true or false means only mails that is marked as unread or read respectively, are selected.
If this key is unspecified or set to null, it is ignored.
- `has_attachment`
Setting this to true or false means only mails that does or does not have an attachment are selected.
If this key is unspecified or set to null, it is ignored.
Set `mark_as_read` to true to mark found mails as read.
Set `delete` to true to delete found mails.
Set `event_headers` to a list of header names you want to include in a `headers` hash in each created event, either in an array of string or in a comma-separated string.
Set `event_headers_style` to one of the following values to normalize the keys of "headers" for downstream agents' convenience:
* `capitalized` (default) - Header names are capitalized; e.g. "Content-Type"
* `downcased` - Header names are downcased; e.g. "content-type"
* `snakecased` - Header names are snakecased; e.g. "content_type"
Set `include_raw_mail` to true to add a `raw_mail` value to each created event, which contains a *Base64-encoded* blob in the "RFC822" format defined in [the IMAP4 standard](https://tools.ietf.org/html/rfc3501). Note that while the result of Base64 encoding will be LF-terminated, its raw content will often be CRLF-terminated because of the nature of the e-mail protocols and formats. The primary use case for a raw mail blob is to pass to a Shell Command Agent with a command like `openssl enc -d -base64 | tr -d '\r' | procmail -Yf-`.
Each agent instance memorizes the highest UID of mails that are found in the last run for each watched folder, so even if you change a set of conditions so that it matches mails that are missed previously, or if you alter the flag status of already found mails, they will not show up as new events.
Also, in order to avoid duplicated notification it keeps a list of Message-Id's of 100 most recent mails, so if multiple mails of the same Message-Id are found, you will only see one event out of them.
MD
event_description <<~MD
Events look like this:
{
"message_id": "...(Message-Id without angle brackets)...",
"folder": "INBOX",
"subject": "...",
"from": "Nanashi <nanashi.gombeh@example.jp>",
"to": ["Jane <jane.doe@example.com>"],
"cc": [],
"date": "2014-05-10T03:47:20+0900",
"mime_type": "text/plain",
"body": "Hello,\n\n...",
"matches": {
}
}
Additionally, "headers" will be included if the `event_headers` option is set, and "raw_mail" if the `include_raw_mail` option is set.
MD
IDCACHE_SIZE = 100
FNM_FLAGS = [:FNM_CASEFOLD, :FNM_EXTGLOB].inject(0) { |flags, sym|
if File.const_defined?(sym)
flags | File.const_get(sym)
else
flags
end
}
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def default_options
{
'expected_update_period_in_days' => "1",
'host' => 'imap.gmail.com',
'ssl' => true,
'username' => 'your.account',
'password' => 'your.password',
'folders' => %w[INBOX],
'conditions' => {}
}
end
def validate_options
if !service
%w[host username password].each { |key|
String === options[key] or
errors.add(:base, '%s is required and must be a string' % key)
}
end
if options['port'].present?
errors.add(:base, "port must be a positive integer") unless is_positive_integer?(options['port'])
end
%w[ssl mark_as_read delete include_raw_mail].each { |key|
if options[key].present? && boolify(options[key]).nil?
errors.add(:base, '%s must be a boolean value' % key)
end
}
case mime_types = options['mime_types']
when nil
when Array
mime_types.all? { |mime_type|
String === mime_type && mime_type.start_with?('text/')
} or errors.add(:base, 'mime_types may only contain strings that match "text/*".')
if mime_types.empty?
errors.add(:base, 'mime_types should not be empty')
end
else
errors.add(:base, 'mime_types must be an array')
end
case folders = options['folders']
when nil
when Array
folders.all? { |folder|
String === folder
} or errors.add(:base, 'folders may only contain strings')
if folders.empty?
errors.add(:base, 'folders should not be empty')
end
else
errors.add(:base, 'folders must be an array')
end
case conditions = options['conditions']
when Hash
conditions.each { |key, value|
value.present? or next
case key
when 'subject', 'body'
case value
when String
begin
Regexp.new(value)
rescue StandardError
errors.add(:base, 'conditions.%s contains an invalid regexp' % key)
end
else
errors.add(:base, 'conditions.%s contains a non-string object' % key)
end
when 'from', 'to', 'cc'
Array(value).each { |pattern|
case pattern
when String
begin
glob_match?(pattern, '')
rescue StandardError
errors.add(:base, 'conditions.%s contains an invalid glob pattern' % key)
end
else
errors.add(:base, 'conditions.%s contains a non-string object' % key)
end
}
when 'is_unread', 'has_attachment'
case boolify(value)
when true, false
else
errors.add(:base, 'conditions.%s must be a boolean value or null' % key)
end
end
}
else
errors.add(:base, 'conditions must be a hash')
end
if options['expected_update_period_in_days'].present?
errors.add(
:base,
"Invalid expected_update_period_in_days format"
) unless is_positive_integer?(options['expected_update_period_in_days'])
end
end
def validate_service
# Override Oauthable#validate_service; service is optional in
# this agent.
end
def check
each_unread_mail { |mail, notified|
message_id = mail.message_id
body_parts = mail.body_parts(mime_types)
matched_part = nil
matches = {}
interpolated['conditions'].all? { |key, value|
case key
when 'subject'
value.present? or next true
re = Regexp.new(value)
if m = re.match(mail.scrubbed(:subject))
m.names.each { |name|
matches[name] = m[name]
}
true
else
false
end
when 'body'
value.present? or next true
re = Regexp.new(value)
matched_part = body_parts.find { |part|
if m = re.match(part.scrubbed(:decoded))
m.names.each { |name|
matches[name] = m[name]
}
true
else
false
end
}
when 'from', 'to', 'cc'
value.present? or next true
begin
# Mail::Field really needs to define respond_to_missing?
# so we could use try(:addresses) here.
addresses = mail.header[key].addresses
rescue NoMethodError
next false
end
addresses.any? { |address|
Array(value).any? { |pattern|
glob_match?(pattern, address)
}
}
when 'has_attachment'
boolify(value) == mail.has_attachment?
when 'is_unread'
true # already filtered out by each_unread_mail
else
log 'Unknown condition key ignored: %s' % key
true
end
} or next
if notified.include?(mail.message_id)
log 'Ignoring mail: %s (already notified)' % message_id
else
matched_part ||= body_parts.first
if matched_part
mime_type = matched_part.mime_type
body = matched_part.scrubbed(:decoded)
else
mime_type = 'text/plain'
body = ''
end
log 'Emitting an event for mail: %s' % message_id
payload = {
'message_id' => message_id,
'folder' => mail.folder,
'subject' => mail.scrubbed(:subject),
'from' => mail.from_addrs.first,
'to' => mail.to_addrs,
'cc' => mail.cc_addrs,
'date' =>
begin
mail.date.iso8601
rescue StandardError
nil
end,
'mime_type' => mime_type,
'body' => body,
'matches' => matches,
'has_attachment' => mail.has_attachment?,
}
if boolify(interpolated['include_raw_mail'])
payload['raw_mail'] = Base64.encode64(mail.raw_mail)
end
if interpolated['event_headers'].present?
headers = mail.header.each_with_object({}) { |field, hash|
name = field.name
hash[name] = (v = hash[name]) ? "#{v}\n#{field.value}" : field.value.to_s
}
payload.update(event_headers_payload(headers))
end
create_event(payload:)
notified << mail.message_id if mail.message_id
end
if boolify(interpolated['mark_as_read'])
log 'Marking as read'
mail.mark_as_read unless dry_run?
end
if boolify(interpolated['delete'])
log 'Deleting'
mail.delete unless dry_run?
end
}
end
def each_unread_mail
if service
host = 'imap.gmail.com'
port = 993
ssl = true
username = google_oauth2_email
password = google_oauth2_access_token
else
host, port, ssl, username = interpolated.values_at(:host, :port, :ssl, :username)
password = interpolated[:password]
end
ssl = boolify(ssl)
port = (Integer(port) if port.present?)
log "Connecting to #{host}#{':%d' % port if port}#{' via SSL' if ssl}"
Client.open(host, port:, ssl:) { |imap|
log "Logging in as #{username}"
if service
imap.authenticate('XOAUTH2', username, password)
else
imap.login(username, password)
end
# 'lastseen' keeps a hash of { uidvalidity => lastseenuid, ... }
lastseen = self.lastseen
seen = self.make_seen
# 'notified' keeps an array of message-ids of {IDCACHE_SIZE}
# most recent notified mails.
notified = self.notified
interpolated['folders'].each { |folder|
log "Selecting the folder: %s" % folder
imap.select(Net::IMAP.encode_utf7(folder))
uidvalidity = imap.uidvalidity
lastseenuid = lastseen[uidvalidity]
if lastseenuid.nil?
maxseq = imap.responses['EXISTS'].last
log "Recording the initial status: %s" % pluralize(maxseq, 'existing mail')
if maxseq > 0
seen[uidvalidity] = imap.fetch(maxseq, 'UID').last.attr['UID']
end
next
end
seen[uidvalidity] = lastseenuid
is_unread = boolify(interpolated['conditions']['is_unread'])
uids = imap.uid_fetch((lastseenuid + 1)..-1, 'FLAGS')
.each_with_object([]) { |data, ret|
uid, flags = data.attr.values_at('UID', 'FLAGS')
seen[uidvalidity] = uid
next if uid <= lastseenuid
case is_unread
when nil, !flags.include?(:Seen)
ret << uid
end
}
log pluralize(uids.size,
case is_unread
when true
'new unread mail'
when false
'new read mail'
else
'new mail'
end)
next if uids.empty?
imap.uid_fetch_mails(uids).each { |mail|
yield mail, notified
}
}
self.notified = notified
self.lastseen = seen
save!
}
ensure
log 'Connection closed'
end
def mime_types
interpolated['mime_types'] || %w[text/plain text/enriched text/html]
end
def lastseen
Seen.new(memory['lastseen'])
end
def lastseen=(value)
memory.delete('seen') # obsolete key
memory['lastseen'] = value
end
def make_seen
Seen.new
end
def notified
Notified.new(memory['notified'])
end
def notified=(value)
memory['notified'] = value
end
private
def glob_match?(pattern, value)
File.fnmatch?(pattern, value, FNM_FLAGS)
end
def pluralize(count, noun)
"%d %s" % [count, noun.pluralize(count)]
end
def event_headers_key
super || 'headers'
end
class Client < ::Net::IMAP
class << self
def open(host, *args)
imap = new(host, *args)
yield imap
ensure
imap.disconnect unless imap.nil?
end
private
def authenticators
# The authenticators table is stored in the Net::IMAP instance.
Net::IMAP.send(:authenticators)
end
end
attr_reader :uidvalidity
def select(folder)
ret = super(@folder = folder)
@uidvalidity = responses['UIDVALIDITY'].last
ret
end
def fetch(*args)
super || []
end
def uid_fetch(*args)
super || []
end
def uid_fetch_mails(set)
uid_fetch(set, 'RFC822.HEADER').map { |data|
Message.new(self, data, folder: @folder, uidvalidity: @uidvalidity)
}
end
end
class Seen < Hash
def initialize(hash = nil)
super()
if hash
# Deserialize a JSON hash which keys are strings
hash.each { |uidvalidity, uid|
self[uidvalidity.to_i] = uid
}
end
end
def []=(uidvalidity, uid)
# Update only if the new value is larger than the current value
if (curr = self[uidvalidity]).nil? || curr <= uid
super
end
end
end
class Notified < Array
def initialize(array = nil)
super()
replace(array) if array
end
def <<(value)
slice!(0...-IDCACHE_SIZE) if size > IDCACHE_SIZE
super
end
end
class Message < SimpleDelegator
DEFAULT_BODY_MIME_TYPES = %w[text/plain text/enriched text/html]
attr_reader :uid, :folder, :uidvalidity
module Scrubbed
def scrubbed(method)
(@scrubbed ||= {})[method.to_sym] ||=
__send__(method).try(:scrub) { |bytes| "<#{bytes.unpack1('H*')}>" }
end
end
include Scrubbed
def initialize(client, fetch_data, props = {})
@client = client
props.each { |key, value|
instance_variable_set(:"@#{key}", value)
}
attr = fetch_data.attr
@uid = attr['UID']
super(Mail.read_from_string(attr['RFC822.HEADER']))
end
def has_attachment?
@has_attachment ||=
if data = @client.uid_fetch(@uid, 'BODYSTRUCTURE').first
struct_has_attachment?(data.attr['BODYSTRUCTURE'])
else
false
end
end
def raw_mail
@raw_mail ||=
if data = @client.uid_fetch(@uid, 'BODY.PEEK[]').first
data.attr['BODY[]']
else
''
end
end
def fetch
@parsed ||= Mail.read_from_string(raw_mail)
end
def body_parts(mime_types = DEFAULT_BODY_MIME_TYPES)
mail = fetch
if mail.multipart?
mail.body.set_sort_order(mime_types)
mail.body.sort_parts!
mail.all_parts
else
[mail]
end.select { |part|
if part.multipart? || part.attachment? || !part.text? ||
!mime_types.include?(part.mime_type)
false
else
part.extend(Scrubbed)
true
end
}
end
def mark_as_read
@client.uid_store(@uid, '+FLAGS', [:Seen])
end
def delete
@client.uid_store(@uid, '+FLAGS', [:Deleted])
@client.expunge
end
private
def struct_has_attachment?(struct)
struct.multipart? && (
struct.subtype == 'MIXED' ||
struct.parts.any? { |part|
struct_has_attachment?(part)
}
)
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/twitter_search_agent.rb | app/models/agents/twitter_search_agent.rb | module Agents
class TwitterSearchAgent < Agent
include TwitterConcern
can_dry_run!
cannot_receive_events!
description <<~MD
The Twitter Search Agent performs and emits the results of a specified Twitter search.
#{twitter_dependencies_missing if dependencies_missing?}
If you want realtime data from Twitter about frequent terms, you should definitely use the Twitter Stream Agent instead.
To be able to use this Agent you need to authenticate with Twitter in the [Services](/services) section first.
You must provide the desired `search`.
Set `result_type` to specify which [type of search results](https://dev.twitter.com/rest/reference/get/search/tweets) you would prefer to receive. Options are "mixed", "recent", and "popular". (default: `mixed`)
Set `max_results` to limit the amount of results to retrieve per run(default: `500`. The API rate limit is ~18,000 per 15 minutes. [Click here to learn more about rate limits](https://dev.twitter.com/rest/public/rate-limiting).
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
Set `starting_at` to the date/time (eg. `Mon Jun 02 00:38:12 +0000 2014`) you want to start receiving tweets from (default: agent's `created_at`)
MD
event_description <<~MD
Events are the raw JSON provided by the [Twitter API v1.1](https://developer.twitter.com/en/docs/twitter-api/v1/tweets/search/api-reference/get-search-tweets) with slight modifications. They should look something like this:
#{tweet_event_description('full_text')}
MD
default_schedule "every_1h"
def working?
event_created_within?(interpolated[:expected_update_period_in_days]) && !recent_error_logs?
end
def default_options
{
'search' => 'freebandnames',
'expected_update_period_in_days' => '2'
}
end
def validate_options
if options[:search].blank?
errors.add(:base, "search is required")
end
if options[:expected_update_period_in_days].blank?
errors.add(:base, "expected_update_period_in_days is required")
end
if options[:starting_at].present?
begin
Time.parse(interpolated[:starting_at])
rescue StandardError
errors.add(:base, "Error parsing starting_at")
end
end
end
def starting_at
if interpolated[:starting_at].present?
begin
Time.parse(interpolated[:starting_at])
rescue StandardError
end
end || created_at || Time.now # for dry-running
end
def max_results
(interpolated[:max_results].presence || 500).to_i
end
def check
opts = {
include_entities: true,
tweet_mode: 'extended',
result_type: interpolated[:result_type].presence,
since_id: memory[:since_id].presence,
}.compact
# http://www.rubydoc.info/gems/twitter/Twitter/REST/Search
tweets = twitter.search(interpolated[:search], opts).take(max_results)
tweets.each do |tweet|
next unless tweet.created_at >= starting_at
memory[:since_id] = [tweet.id, *memory[:since_id]].max
create_event(payload: format_tweet(tweet))
end
save!
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/jabber_agent.rb | app/models/agents/jabber_agent.rb | module Agents
class JabberAgent < Agent
include LongRunnable
include FormConfigurable
cannot_be_scheduled!
gem_dependency_check { defined?(Jabber) }
description <<~MD
The Jabber Agent will send any events it receives to your Jabber/XMPP IM account.
#{'## Include `xmpp4r` in your Gemfile to use this Agent!' if dependencies_missing?}
Specify the `jabber_server` and `jabber_port` for your Jabber server.
The `message` is sent from `jabber_sender` to `jaber_receiver`. This message
can contain any keys found in the source's payload, escaped using double curly braces.
ex: `"News Story: {{title}}: {{url}}"`
When `connect_to_receiver` is set to true, the JabberAgent will emit an event for every message it receives.
Have a look at the [Wiki](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) to learn more about liquid templating.
MD
event_description <<~MD
`event` will be set to either `on_join`, `on_leave`, `on_message`, `on_room_message` or `on_subject`
{
"event": "on_message",
"time": null,
"nick": "Dominik Sander",
"message": "Hello from huginn."
}
MD
def default_options
{
'jabber_server' => '127.0.0.1',
'jabber_port' => '5222',
'jabber_sender' => 'huginn@localhost',
'jabber_receiver' => 'muninn@localhost',
'jabber_password' => '',
'message' => 'It will be {{temp}} out tomorrow',
'expected_receive_period_in_days' => "2"
}
end
form_configurable :jabber_server
form_configurable :jabber_port
form_configurable :jabber_sender
form_configurable :jabber_receiver
form_configurable :jabber_password
form_configurable :message, type: :text
form_configurable :connect_to_receiver, type: :boolean
form_configurable :expected_receive_period_in_days
def working?
last_receive_at && last_receive_at > interpolated['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def receive(incoming_events)
incoming_events.each do |event|
log "Sending IM to #{interpolated['jabber_receiver']} with event #{event.id}"
deliver body(event)
end
end
def validate_options
errors.add(:base, "server and username is required") unless credentials_present?
end
def deliver(text)
client.send Jabber::Message.new(interpolated['jabber_receiver'], text).set_type(:chat)
end
def start_worker?
boolify(interpolated[:connect_to_receiver])
end
private
def client
Jabber::Client.new(Jabber::JID.new(interpolated['jabber_sender'])).tap do |sender|
sender.connect(interpolated['jabber_server'], interpolated['jabber_port'] || '5222')
sender.auth interpolated['jabber_password']
end
end
def credentials_present?
options['jabber_server'].present? && options['jabber_sender'].present? && options['jabber_receiver'].present?
end
def body(event)
interpolated(event)['message']
end
class Worker < LongRunnable::Worker
IGNORE_MESSAGES_FOR = 5
def setup
require 'xmpp4r/muc/helper/simplemucclient'
end
def run
@started_at = Time.now
@client = client
muc = Jabber::MUC::SimpleMUCClient.new(@client)
[:on_join, :on_leave, :on_message, :on_room_message, :on_subject].each do |event|
muc.__send__(event) do |*args|
message_handler(event, args)
end
end
muc.join(agent.interpolated['jabber_receiver'])
sleep(1) while @client.is_connected?
end
def message_handler(event, args)
return if Time.now - @started_at < IGNORE_MESSAGES_FOR
time, nick, message = normalize_args(event, args)
AgentRunner.with_connection do
agent.create_event(payload: { event:, time:, nick:, message: })
end
end
def stop
@client.close
@client.stop
thread.terminate
end
def client
agent.send(:client)
end
private
def normalize_args(event, args)
case event
when :on_join, :on_leave
[args[0], args[1]]
when :on_message, :on_subject
args
when :on_room_message
[args[0], nil, args[1]]
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/trigger_agent.rb | app/models/agents/trigger_agent.rb | module Agents
class TriggerAgent < Agent
cannot_be_scheduled!
can_dry_run!
VALID_COMPARISON_TYPES = %w[
regex
!regex
field<value
field<=value
field==value
field!=value
field>=value
field>value
not\ in
]
description <<~MD
The Trigger Agent will watch for a specific value in an Event payload.
The `rules` array contains a mixture of strings and hashes.
A string rule is a Liquid template and counts as a match when it expands to `true`.
A hash rule consists of the following keys: `path`, `value`, and `type`.
The `path` value is a dotted path through a hash in [JSONPaths](http://goessner.net/articles/JsonPath/) syntax. For simple events, this is usually just the name of the field you want, like 'text' for the text key of the event.
The `type` can be one of #{VALID_COMPARISON_TYPES.map { |t| "`#{t}`" }.to_sentence} and compares with the `value`. Note that regex patterns are matched case insensitively. If you want case sensitive matching, prefix your pattern with `(?-i)`.
In any `type` including regex Liquid variables can be used normally. To search for just a word matching the concatenation of `foo` and variable `bar` would use `value` of `foo{{bar}}`. Note that starting/ending delimiters like `/` or `|` are not required for regex.
The `value` can be a single value or an array of values. In the case of an array, all items must be strings, and if one or more values match, then the rule matches. Note: avoid using `field!=value` with arrays, you should use `not in` instead.
By default, all rules must match for the Agent to trigger. You can switch this so that only one rule must match by
setting `must_match` to `1`.
The resulting Event will have a payload message of `message`. You can use liquid templating in the `message, have a look at the [Wiki](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) for details.
Set `keep_event` to `true` if you'd like to re-emit the incoming event, optionally merged with 'message' when provided.
Set `expected_receive_period_in_days` to the maximum amount of time that you'd expect to pass between Events being received by this Agent.
MD
event_description <<~MD
Events look like this:
{ "message": "Your message" }
MD
private def valid_rule?(rule)
case rule
when String
true
when Hash
VALID_COMPARISON_TYPES.include?(rule['type']) &&
/\S/.match?(rule['path']) &&
rule.key?('value')
else
false
end
end
def validate_options
unless options['expected_receive_period_in_days'].present? &&
options['rules'].present? &&
options['rules'].all? { |rule| valid_rule?(rule) }
errors.add(:base,
"expected_receive_period_in_days, message, and rules, with a type, value, and path for every rule, are required")
end
errors.add(:base,
"message is required unless 'keep_event' is 'true'") unless options['message'].present? || keep_event?
errors.add(:base,
"keep_event, when present, must be 'true' or 'false'") unless options['keep_event'].blank? || %w[
true false
].include?(options['keep_event'])
if options['must_match'].present?
if options['must_match'].to_i < 1
errors.add(:base, "If used, the 'must_match' option must be a positive integer")
elsif options['must_match'].to_i > options['rules'].length
errors.add(:base, "If used, the 'must_match' option must be equal to or less than the number of rules")
end
end
end
def default_options
{
'expected_receive_period_in_days' => "2",
'keep_event' => 'false',
'rules' => [{
'type' => "regex",
'value' => "foo\\d+bar",
'path' => "topkey.subkey.subkey.goal",
}],
'message' => "Looks like your pattern matched in '{{value}}'!"
}
end
def working?
last_receive_at && last_receive_at > interpolated['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def receive(incoming_events)
incoming_events.each do |event|
opts = interpolated(event)
match_results = opts['rules'].map do |rule|
if rule.is_a?(String)
next boolify(rule)
end
value_at_path = Utils.value_at(event['payload'], rule['path'])
rule_values = rule['value']
rule_values = [rule_values] unless rule_values.is_a?(Array)
if rule['type'] == 'not in'
!rule_values.include?(value_at_path.to_s)
elsif rule['type'] == 'field==value'
rule_values.include?(value_at_path.to_s)
else
rule_values.any? do |rule_value|
case rule['type']
when "regex"
value_at_path.to_s =~ Regexp.new(rule_value, Regexp::IGNORECASE)
when "!regex"
value_at_path.to_s !~ Regexp.new(rule_value, Regexp::IGNORECASE)
when "field>value"
value_at_path.to_f > rule_value.to_f
when "field>=value"
value_at_path.to_f >= rule_value.to_f
when "field<value"
value_at_path.to_f < rule_value.to_f
when "field<=value"
value_at_path.to_f <= rule_value.to_f
when "field!=value"
value_at_path.to_s != rule_value.to_s
else
raise "Invalid type of #{rule['type']} in TriggerAgent##{id}"
end
end
end
end
next unless matches?(match_results)
if keep_event?
payload = event.payload.dup
payload['message'] = opts['message'] if opts['message'].present?
else
payload = { 'message' => opts['message'] }
end
create_event(payload:)
end
end
def matches?(matches)
if options['must_match'].present?
matches.select { |match| match }.length >= options['must_match'].to_i
else
matches.all?
end
end
def keep_event?
boolify(interpolated['keep_event'])
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/sentiment_agent.rb | app/models/agents/sentiment_agent.rb | require 'csv'
module Agents
class SentimentAgent < Agent
class_attribute :anew
cannot_be_scheduled!
description <<~MD
The Sentiment Agent generates `good-bad` (psychological valence or happiness index), `active-passive` (arousal), and `strong-weak` (dominance) score. It will output a value between 1 and 9. It will only work on English content.
Make sure the content this agent is analyzing is of sufficient length to get respectable results.
Provide a JSONPath in `content` field where content is residing and set `expected_receive_period_in_days` to the maximum number of days you would allow to be passed between events being received by this agent.
MD
event_description <<~MD
Events look like:
{
"content": "The quick brown fox jumps over the lazy dog.",
"valence": 6.196666666666666,
"arousal": 4.993333333333333,
"dominance": 5.63
}
MD
def default_options
{
'content' => "$.message.text[*]",
'expected_receive_period_in_days' => 1
}
end
def working?
last_receive_at && last_receive_at > interpolated['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def receive(incoming_events)
anew = self.class.sentiment_hash
incoming_events.each do |event|
Utils.values_at(event.payload, interpolated['content']).each do |content|
sent_values = sentiment_values anew, content
create_event payload: {
'content' => content,
'valence' => sent_values[0],
'arousal' => sent_values[1],
'dominance' => sent_values[2],
'original_event' => event.payload
}
end
end
end
def validate_options
errors.add(
:base,
"content and expected_receive_period_in_days must be present"
) unless options['content'].present? && options['expected_receive_period_in_days'].present?
end
def self.sentiment_hash
unless self.anew
self.anew = {}
CSV.foreach Rails.root.join('data/anew.csv') do |row|
self.anew[row[0]] = row.values_at(2, 4, 6).map { |val| val.to_f }
end
end
self.anew
end
def sentiment_values(anew, text)
valence, arousal, dominance, freq = [0] * 4
text.downcase.strip.gsub(/[^a-z ]/, "").split.each do |word|
next unless anew.has_key? word
valence += anew[word][0]
arousal += anew[word][1]
dominance += anew[word][2]
freq += 1
end
if valence != 0
[valence / freq, arousal / freq, dominance / freq]
else
["Insufficient data for meaningful answer"] * 3
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/event_formatting_agent.rb | app/models/agents/event_formatting_agent.rb | module Agents
class EventFormattingAgent < Agent
cannot_be_scheduled!
can_dry_run!
description <<~MD
The Event Formatting Agent allows you to format incoming Events, adding new fields as needed.
For example, here is a possible Event:
{
"high": {
"celsius": "18",
"fahreinheit": "64"
},
"date": {
"epoch": "1357959600",
"pretty": "10:00 PM EST on January 11, 2013"
},
"conditions": "Rain showers",
"data": "This is some data"
}
You may want to send this event to another Agent, for example a Twilio Agent, which expects a `message` key.
You can use an Event Formatting Agent's `instructions` setting to do this in the following way:
"instructions": {
"message": "Today's conditions look like {{conditions}} with a high temperature of {{high.celsius}} degrees Celsius.",
"subject": "{{data}}",
"created_at": "{{created_at}}"
}
Names here like `conditions`, `high` and `data` refer to the corresponding values in the Event hash.
The special key `created_at` refers to the timestamp of the Event, which can be reformatted by the `date` filter, like `{{created_at | date:"at %I:%M %p" }}`.
The upstream agent of each received event is accessible via the key `agent`, which has the following attributes: #{''.tap { |s| s << Agent::Drop.instance_methods(false).map { |m| "`#{m}`" }.join(', ') }}.
Have a look at the [Wiki](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) to learn more about liquid templating.
Events generated by this possible Event Formatting Agent will look like:
{
"message": "Today's conditions look like Rain showers with a high temperature of 18 degrees Celsius.",
"subject": "This is some data"
}
In `matchers` setting you can perform regular expression matching against contents of events and expand the match data for use in `instructions` setting. Here is an example:
{
"matchers": [
{
"path": "{{date.pretty}}",
"regexp": "\\A(?<time>\\d\\d:\\d\\d [AP]M [A-Z]+)",
"to": "pretty_date"
}
]
}
This virtually merges the following hash into the original event hash:
"pretty_date": {
"time": "10:00 PM EST",
"0": "10:00 PM EST on January 11, 2013"
"1": "10:00 PM EST"
}
You could also use the `regex_extract` filter to achieve the same goal.
So you can use it in `instructions` like this:
"instructions": {
"message": "Today's conditions look like {{conditions}} with a high temperature of {{high.celsius}} degrees Celsius according to the forecast at {{pretty_date.time}}.",
"subject": "{{data}}"
}
If you want to retain original contents of events and only add new keys, then set `mode` to `merge`, otherwise set it to `clean`.
To CGI escape output (for example when creating a link), use the Liquid `uri_escape` filter, like so:
{
"message": "A peak was on Twitter in {{group_by}}. Search: https://twitter.com/search?q={{group_by | uri_escape}}"
}
MD
event_description do
"Events will have the following fields%s:\n\n %s" % [
case options['mode'].to_s
when 'merge'
', merged with the original contents'
when /\{/
', conditionally merged with the original contents'
end,
Utils.pretty_print(Hash[options['instructions'].keys.map { |key|
[key, "..."]
}])
]
end
def validate_options
errors.add(:base,
"instructions and mode need to be present.") unless options['instructions'].present? && options['mode'].present?
if options['mode'].present? && !options['mode'].to_s.include?('{{') && !%(clean merge).include?(options['mode'].to_s)
errors.add(:base, "mode must be 'clean' or 'merge'")
end
validate_matchers
end
def default_options
{
'instructions' => {
'message' => "You received a text {{text}} from {{fields.from}}",
'agent' => "{{agent.type}}",
'some_other_field' => "Looks like the weather is going to be {{fields.weather}}"
},
'mode' => "clean",
}
end
def working?
!recent_error_logs?
end
def receive(incoming_events)
matchers = compiled_matchers
incoming_events.each do |event|
interpolate_with(event) do
apply_compiled_matchers(matchers, event) do
formatted_event = interpolated['mode'].to_s == "merge" ? event.payload.dup : {}
formatted_event.merge! interpolated['instructions']
create_event payload: formatted_event
end
end
end
end
private
def validate_matchers
matchers = options['matchers'] or return
unless matchers.is_a?(Array)
errors.add(:base, "matchers must be an array if present")
return
end
matchers.each do |matcher|
unless matcher.is_a?(Hash)
errors.add(:base, "each matcher must be a hash")
next
end
regexp, path, to = matcher.values_at(*%w[regexp path to])
if regexp.present?
begin
Regexp.new(regexp)
rescue StandardError
errors.add(:base, "bad regexp found in matchers: #{regexp}")
end
else
errors.add(:base, "regexp is mandatory for a matcher and must be a string")
end
errors.add(:base, "path is mandatory for a matcher and must be a string") if !path.present?
errors.add(:base, "to must be a string if present in a matcher") if to.present? && !to.is_a?(String)
end
end
def compiled_matchers
if matchers = options['matchers']
matchers.map { |matcher|
regexp, path, to = matcher.values_at(*%w[regexp path to])
[Regexp.new(regexp), path, to]
}
end
end
def apply_compiled_matchers(matchers, event, &block)
return yield if matchers.nil?
# event.payload.dup does not work; HashWithIndifferentAccess is
# a source of trouble here.
hash = {}.update(event.payload)
matchers.each do |re, path, to|
m = re.match(interpolate_string(path, hash)) or next
mhash =
if to
case value = hash[to]
when Hash
value
else
hash[to] = {}
end
else
hash
end
m.size.times do |i|
mhash[i.to_s] = m[i]
end
m.names.each do |name|
mhash[name] = m[name]
end
end
interpolate_with(hash, &block)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/liquid_output_agent.rb | app/models/agents/liquid_output_agent.rb | module Agents
class LiquidOutputAgent < Agent
include FormConfigurable
cannot_be_scheduled!
cannot_create_events!
DATE_UNITS = %w[second seconds minute minutes hour hours day days week weeks month months year years]
description do
<<~MD
The Liquid Output Agent outputs events through a Liquid template you provide. Use it to create a HTML page, or a json feed, or anything else that can be rendered as a string from your stream of Huginn data.
This Agent will output data at:
`https://#{ENV['DOMAIN']}#{Rails.application.routes.url_helpers.web_requests_path(agent_id: ':id', user_id:, secret: ':secret', format: :any_extension)}`
where `:secret` is the secret specified in your options. You can use any extension you wish.
Options:
* `secret` - A token that the requestor must provide for light-weight authentication.
* `expected_receive_period_in_days` - How often you expect data to be received by this Agent from other Agents.
* `content` - The content to display when someone requests this page.
* `line_break_is_lf` - Use LF as line breaks instead of CRLF.
* `mime_type` - The mime type to use when someone requests this page.
* `response_headers` - An object with any custom response headers. (example: `{"Access-Control-Allow-Origin": "*"}`)
* `mode` - The behavior that determines what data is passed to the Liquid template.
* `event_limit` - A limit applied to the events passed to a template when in "Last X events" mode. Can be a count like "1", or an amount of time like "1 day" or "5 minutes".
# Liquid Templating
The content you provide will be run as a [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) template. The data from the last event received will be used when processing the Liquid template.
# Modes
### Merge events
The data for incoming events will be merged. So if two events come in like this:
```
{ 'a' => 'b', 'c' => 'd'}
{ 'a' => 'bb', 'e' => 'f'}
```
The final result will be:
```
{ 'a' => 'bb', 'c' => 'd', 'e' => 'f'}
```
This merged version will be passed to the Liquid template.
### Last event in
The data from the last event will be passed to the template.
### Last X events
All of the events received by this agent will be passed to the template as the `events` array.
The number of events can be controlled via the `event_limit` option.
If `event_limit` is an integer X, the last X events will be passed to the template.
If `event_limit` is an integer with a unit of measure like "1 day" or "5 minutes" or "9 years", a date filter will be applied to the events passed to the template.
If no `event_limit` is provided, then all of the events for the agent will be passed to the template.
For performance, the maximum `event_limit` allowed is 1000.
MD
end
def default_options
content = <<~EOF
When you use the "Last event in" or "Merge events" option, you can use variables from the last event received, like this:
Name: {{name}}
Url: {{url}}
If you use the "Last X Events" mode, a set of events will be passed to your Liquid template. You can use them like this:
<table class="table">
{% for event in events %}
<tr>
<td>{{ event.title }}</td>
<td><a href="{{ event.url }}">Click here to see</a></td>
</tr>
{% endfor %}
</table>
EOF
{
"secret" => "a-secret-key",
"expected_receive_period_in_days" => 2,
"mime_type" => 'text/html',
"mode" => 'Last event in',
"event_limit" => '',
"content" => content,
}
end
form_configurable :secret
form_configurable :expected_receive_period_in_days
form_configurable :content, type: :text
form_configurable :line_break_is_lf, type: :boolean
form_configurable :mime_type
form_configurable :mode, type: :array, values: ['Last event in', 'Merge events', 'Last X events']
form_configurable :event_limit
before_save :update_last_modified_at, if: :options_changed?
def working?
last_receive_at && last_receive_at > options['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def validate_options
if options['secret'].present?
case options['secret']
when %r{[/.]}
errors.add(:base, "secret may not contain a slash or dot")
when String
else
errors.add(:base, "secret must be a string")
end
else
errors.add(:base, "Please specify one secret for 'authenticating' incoming feed requests")
end
unless options['expected_receive_period_in_days'].present? && options['expected_receive_period_in_days'].to_i > 0
errors.add(
:base,
"Please provide 'expected_receive_period_in_days' to indicate how many days can pass before this Agent is considered to be not working"
)
end
event_limit =
if value = options['event_limit'].presence
begin
Integer(value)
rescue StandardError
false
end
end
if event_limit == false && date_limit.blank?
errors.add(:base, "Event limit must be an integer that is less than 1001 or an integer plus a valid unit.")
elsif event_limit && event_limit > 1000
errors.add(:base, "For performance reasons, you cannot have an event limit greater than 1000.")
end
end
def receive(incoming_events)
return unless ['merge events', 'last event in'].include?(mode)
memory['last_event'] ||= {}
incoming_events.each do |event|
memory['last_event'] =
case mode
when 'merge events'
memory['last_event'].merge(event.payload)
else
event.payload
end
end
update_last_modified_at
end
def receive_web_request(request)
if valid_authentication?(request.params)
if request.headers['If-None-Match'].presence&.include?(etag)
[nil, 304, {}]
else
[liquified_content, 200, mime_type, response_headers]
end
else
[unauthorized_content(request.format.to_s), 401]
end
end
private
def mode
options['mode'].to_s.downcase
end
def unauthorized_content(format)
if format =~ /json/
{ error: "Not Authorized" }
else
"Not Authorized"
end
end
def valid_authentication?(params)
interpolated['secret'] == params['secret']
end
def mime_type
options['mime_type'].presence || 'text/html'
end
def liquified_content
content = interpolated(data_for_liquid_template)['content']
content.gsub!(/\r(?=\n)/, '') if boolify(options['line_break_is_lf'])
content
end
def data_for_liquid_template
case mode
when 'last x events'
events = received_events
events = events.where('events.created_at > ?', date_limit) if date_limit
events = events.limit count_limit
events = events.to_a.map { |x| x.payload }
{ 'events' => events }
else
memory['last_event'] || {}
end
end
public def etag
memory['etag'] || '"0.000000000"'
end
def last_modified_at
memory['last_modified_at']&.to_time || Time.at(0)
end
def last_modified_at=(time)
memory['last_modified_at'] = time.iso8601(9)
memory['etag'] = time.strftime('"%s.%9N"')
end
def update_last_modified_at
self.last_modified_at = Time.now
end
def max_age
options['expected_receive_period_in_days'].to_i * 86400
end
def response_headers
{
'Last-Modified' => last_modified_at.httpdate,
'ETag' => etag,
'Cache-Control' => "max-age=#{max_age}",
}.update(interpolated['response_headers'].presence || {})
end
def count_limit
[Integer(options['event_limit']), 1000].min
rescue StandardError
1000
end
def date_limit
return nil unless options['event_limit'].to_s.include?(' ')
value, unit = options['event_limit'].split(' ')
value = begin
Integer(value)
rescue StandardError
nil
end
return nil unless value
unit = unit.to_s.downcase
return nil unless DATE_UNITS.include?(unit)
value.send(unit.to_sym).ago
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/twitter_publish_agent.rb | app/models/agents/twitter_publish_agent.rb | module Agents
class TwitterPublishAgent < Agent
include TwitterConcern
cannot_be_scheduled!
description <<~MD
The Twitter Publish Agent publishes tweets from the events it receives.
#{twitter_dependencies_missing if dependencies_missing?}
To be able to use this Agent you need to authenticate with Twitter in the [Services](/services) section first.
You must also specify a `message` parameter, you can use [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) to format the message.
Additional parameters can be passed via `parameters`.
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
If `output_mode` is set to `merge`, the emitted Event will be merged into the original contents of the received Event.
MD
event_description <<~MD
Events look like this:
{
"success": true,
"published_tweet": "...",
"tweet_id": ...,
"tweet_url": "...",
"agent_id": ...,
"event_id": ...
}
{
"success": false,
"error": "...",
"failed_tweet": "...",
"agent_id": ...,
"event_id": ...
}
Original event contents will be merged when `output_mode` is set to `merge`.
MD
def validate_options
errors.add(:base,
"expected_update_period_in_days is required") unless options['expected_update_period_in_days'].present?
if options['output_mode'].present? && !options['output_mode'].to_s.include?('{') && !%(clean merge).include?(options['output_mode'].to_s)
errors.add(:base, "if provided, output_mode must be 'clean' or 'merge'")
end
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && most_recent_event && most_recent_event.payload['success'] == true && !recent_error_logs?
end
def default_options
{
'expected_update_period_in_days' => "10",
'message' => "{{text}}",
'parameters' => {},
'output_mode' => 'clean'
}
end
def receive(incoming_events)
# if there are too many, dump a bunch to avoid getting rate limited
if incoming_events.count > 20
incoming_events = incoming_events.first(20)
end
incoming_events.each do |event|
tweet_text, parameters = interpolated(event).values_at('message', 'parameters')
new_event = interpolated['output_mode'].to_s == 'merge' ? event.payload.dup : {}
begin
tweet = publish_tweet(tweet_text, parameters.presence || {})
rescue Twitter::Error => e
new_event.update(
'success' => false,
'error' => e.message,
'failed_tweet' => tweet_text,
'agent_id' => event.agent_id,
'event_id' => event.id
)
else
new_event.update(
'success' => true,
'published_tweet' => tweet_text,
'tweet_id' => tweet.id,
'tweet_url' => tweet.url,
'agent_id' => event.agent_id,
'event_id' => event.id
)
end
create_event payload: new_event
end
end
def publish_tweet(text, parameters = {})
twitter.update(text, parameters)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/tumblr_publish_agent.rb | app/models/agents/tumblr_publish_agent.rb | module Agents
class TumblrPublishAgent < Agent
include TumblrConcern
cannot_be_scheduled!
gem_dependency_check { defined?(Tumblr::Client) }
description <<~MD
The Tumblr Publish Agent publishes Tumblr posts from the events it receives.
#{'## Include `tumblr_client` and `omniauth-tumblr` in your Gemfile to use this Agent!' if dependencies_missing?}
To be able to use this Agent you need to authenticate with Tumblr in the [Services](/services) section first.
**Required fields:**
`blog_name` Your Tumblr URL (e.g. "mustardhamsters.tumblr.com")
`post_type` One of [text, photo, quote, link, chat, audio, video, reblog]
-------------
You may leave any of the following optional fields blank. Including a field not allowed for the specified `post_type` will cause a failure.
**Any post type**
* `state` published, draft, queue, private
* `tags` Comma-separated tags for this post
* `tweet` off, text for tweet
* `date` GMT date and time of the post as a string
* `format` html, markdown
* `slug` short text summary at end of the post URL
**Text** `title` `body`
**Photo** `caption` `link` `source`
**Quote** `quote` `source`
**Link** `title` `url` `description`
**Chat** `title` `conversation`
**Audio** `caption` `external_url`
**Video** `caption` `embed`
**Reblog** `id` `reblog_key` `comment`
-------------
[Full information on field options](https://www.tumblr.com/docs/en/api/v2#posting)
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
MD
def validate_options
errors.add(:base,
"expected_update_period_in_days is required") unless options['expected_update_period_in_days'].present?
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && most_recent_event && most_recent_event.payload['success'] == true && !recent_error_logs?
end
def default_options
{
'expected_update_period_in_days' => "10",
'blog_name' => "{{blog_name}}",
'post_type' => "{{post_type}}",
'options' => {
'state' => "{{state}}",
'tags' => "{{tags}}",
'tweet' => "{{tweet}}",
'date' => "{{date}}",
'format' => "{{format}}",
'slug' => "{{slug}}",
'title' => "{{title}}",
'body' => "{{body}}",
'caption' => "{{caption}}",
'link' => "{{link}}",
'source' => "{{source}}",
'quote' => "{{quote}}",
'url' => "{{url}}",
'description' => "{{description}}",
'conversation' => "{{conversation}}",
'external_url' => "{{external_url}}",
'embed' => "{{embed}}",
'id' => "{{id}}",
'reblog_key' => "{{reblog_key}}",
'comment' => "{{comment}}",
},
}
end
def receive(incoming_events)
# if there are too many, dump a bunch to avoid getting rate limited
if incoming_events.count > 20
incoming_events = incoming_events.first(20)
end
incoming_events.each do |event|
blog_name = interpolated(event)['blog_name']
post_type = interpolated(event)['post_type']
options = interpolated(event)['options']
begin
post = publish_post(blog_name, post_type, options)
if !post.has_key?('id')
log("Failed to create #{post_type} post on #{blog_name}: #{post.to_json}, options: #{options.to_json}")
return
end
expanded_post = get_post(blog_name, post["id"])
create_event payload: {
'success' => true,
'published_post' => "[" + blog_name + "] " + post_type,
'post_id' => post["id"],
'agent_id' => event.agent_id,
'event_id' => event.id,
'post' => expanded_post
}
end
end
end
def publish_post(blog_name, post_type, options)
options_obj = {
state: options['state'],
tags: options['tags'],
tweet: options['tweet'],
date: options['date'],
format: options['format'],
slug: options['slug'],
}
case post_type
when "text"
options_obj[:title] = options['title']
options_obj[:body] = options['body']
tumblr.text(blog_name, options_obj)
when "photo"
options_obj[:caption] = options['caption']
options_obj[:link] = options['link']
options_obj[:source] = options['source']
tumblr.photo(blog_name, options_obj)
when "quote"
options_obj[:quote] = options['quote']
options_obj[:source] = options['source']
tumblr.quote(blog_name, options_obj)
when "link"
options_obj[:title] = options['title']
options_obj[:url] = options['url']
options_obj[:description] = options['description']
tumblr.link(blog_name, options_obj)
when "chat"
options_obj[:title] = options['title']
options_obj[:conversation] = options['conversation']
tumblr.chat(blog_name, options_obj)
when "audio"
options_obj[:caption] = options['caption']
options_obj[:external_url] = options['external_url']
tumblr.audio(blog_name, options_obj)
when "video"
options_obj[:caption] = options['caption']
options_obj[:embed] = options['embed']
tumblr.video(blog_name, options_obj)
when "reblog"
options_obj[:id] = options['id']
options_obj[:reblog_key] = options['reblog_key']
options_obj[:comment] = options['comment']
tumblr.reblog(blog_name, options_obj)
end
end
def get_post(blog_name, id)
obj = tumblr.posts(blog_name, { id: })
obj["posts"].first
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/jira_agent.rb | app/models/agents/jira_agent.rb | #!/usr/bin/env ruby
require 'cgi'
require 'httparty'
require 'date'
module Agents
class JiraAgent < Agent
include WebRequestConcern
cannot_receive_events!
description <<~MD
The Jira Agent subscribes to Jira issue updates.
- `jira_url` specifies the full URL of the jira installation, including https://
- `jql` is an optional Jira Query Language-based filter to limit the flow of events. See [JQL Docs](https://confluence.atlassian.com/display/JIRA/Advanced+Searching) for details.#{' '}
- `username` and `password` are optional, and may need to be specified if your Jira instance is read-protected
- `timeout` is an optional parameter that specifies how long the request processing may take in minutes.
The agent does periodic queries and emits the events containing the updated issues in JSON format.
NOTE: upon the first execution, the agent will fetch everything available by the JQL query. So if it's not desirable, limit the `jql` query by date.
MD
event_description <<~MD
Events are the raw JSON generated by Jira REST API
{
"expand": "editmeta,renderedFields,transitions,changelog,operations",
"id": "80127",
"self": "https://jira.atlassian.com/rest/api/2/issue/80127",
"key": "BAM-3512",
"fields": {
...
}
}
MD
default_schedule "every_10m"
MAX_EMPTY_REQUESTS = 10
def default_options
{
'username' => '',
'password' => '',
'jira_url' => 'https://jira.atlassian.com',
'jql' => '',
'expected_update_period_in_days' => '7',
'timeout' => '1'
}
end
def validate_options
errors.add(:base,
"you need to specify password if user name is set") if options['username'].present? and !options['password'].present?
errors.add(:base, "you need to specify your jira URL") unless options['jira_url'].present?
errors.add(:base,
"you need to specify the expected update period") unless options['expected_update_period_in_days'].present?
errors.add(:base, "you need to specify request timeout") unless options['timeout'].present?
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def check
last_run = nil
current_run = Time.now.utc.iso8601
last_run = Time.parse(memory[:last_run]) if memory[:last_run]
issues = get_issues(last_run)
issues.each do |issue|
updated = Time.parse(issue['fields']['updated'])
# this check is more precise than in get_issues()
# see get_issues() for explanation
if !last_run or updated > last_run
create_event payload: issue
end
end
memory[:last_run] = current_run
end
private
def request_url(jql, start_at)
"#{interpolated[:jira_url]}/rest/api/2/search?jql=#{CGI.escape(jql)}&fields=*all&startAt=#{start_at}"
end
def request_options
ropts = { headers: { "User-Agent" => user_agent } }
if !interpolated[:username].empty?
ropts = ropts.merge({
basic_auth: {
username: interpolated[:username],
password: interpolated[:password]
}
})
end
ropts
end
def get(url, options)
response = HTTParty.get(url, options)
case response.code
when 200
# OK
when 400
raise "Jira error: #{response['errorMessages']}"
when 403
raise "Authentication failed: Forbidden (403)"
else
raise "Request failed: #{response}"
end
response
end
def get_issues(since)
startAt = 0
issues = []
# JQL doesn't have an ability to specify timezones
# Because of this we have to fetch issues 24 h
# earlier and filter out unnecessary ones at a later
# stage. Fortunately, the 'updated' field has GMT
# offset
since -= 24 * 60 * 60 if since
jql = ""
if !interpolated[:jql].empty? && since
jql = "(#{interpolated[:jql]}) and updated >= '#{since.strftime('%Y-%m-%d %H:%M')}'"
else
jql = interpolated[:jql] if !interpolated[:jql].empty?
jql = "updated >= '#{since.strftime('%Y-%m-%d %H:%M')}'" if since
end
start_time = Time.now
request_limit = 0
loop do
response = get(request_url(jql, startAt), request_options)
if response['issues'].length == 0
request_limit += 1
end
if request_limit > MAX_EMPTY_REQUESTS
raise "There is no progress while fetching issues"
end
if Time.now > start_time + interpolated['timeout'].to_i * 60
raise "Timeout exceeded while fetching issues"
end
issues += response['issues']
startAt += response['issues'].length
break if startAt >= response['total']
end
issues
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/twitter_action_agent.rb | app/models/agents/twitter_action_agent.rb | module Agents
class TwitterActionAgent < Agent
include TwitterConcern
cannot_be_scheduled!
description <<~MD
The Twitter Action Agent is able to retweet or favorite tweets from the events it receives.
#{twitter_dependencies_missing if dependencies_missing?}
It expects to consume events generated by twitter agents where the payload is a hash of tweet information. The existing TwitterStreamAgent is one example of a valid event producer for this Agent.
To be able to use this Agent you need to authenticate with Twitter in the [Services](/services) section first.
Set `expected_receive_period_in_days` to the maximum amount of time that you'd expect to pass between Events being received by this Agent.
Set `retweet` to either true or false.
Set `favorite` to either true or false.
Set `emit_error_events` to true to emit an Event when the action failed, otherwise the action will be retried.
MD
def validate_options
unless options['expected_receive_period_in_days'].present?
errors.add(:base, "expected_receive_period_in_days is required")
end
unless retweet? || favorite?
errors.add(:base, "at least one action must be true")
end
if emit_error_events?.nil?
errors.add(:base, "emit_error_events must be set to 'true' or 'false'")
end
end
def working?
last_receive_at && last_receive_at > interpolated['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def default_options
{
'expected_receive_period_in_days' => '2',
'favorite' => 'false',
'retweet' => 'true',
'emit_error_events' => 'false'
}
end
def retweet?
boolify(options['retweet'])
end
def favorite?
boolify(options['favorite'])
end
def emit_error_events?
boolify(options['emit_error_events'])
end
def receive(incoming_events)
tweets = tweets_from_events(incoming_events)
begin
twitter.favorite(tweets) if favorite?
twitter.retweet(tweets) if retweet?
rescue Twitter::Error => e
case e
when Twitter::Error::AlreadyRetweeted, Twitter::Error::AlreadyFavorited
error e.message
else
raise e unless emit_error_events?
end
if emit_error_events?
create_event payload: {
'success' => false,
'error' => e.message,
'tweets' => Hash[tweets.map { |t| [t.id, t.text] }],
'agent_ids' => incoming_events.map(&:agent_id),
'event_ids' => incoming_events.map(&:id)
}
end
end
end
def tweets_from_events(events)
events.map do |e|
Twitter::Tweet.new(
id: e.payload["id"],
text: e.payload["expanded_text"] || e.payload["full_text"] || e.payload["text"]
)
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/twitter_favorites.rb | app/models/agents/twitter_favorites.rb | module Agents
class TwitterFavorites < Agent
include TwitterConcern
can_dry_run!
cannot_receive_events!
description <<~MD
The Twitter Favorites List Agent follows the favorites list of a specified Twitter user.
#{twitter_dependencies_missing if dependencies_missing?}
To be able to use this Agent you need to authenticate with Twitter in the [Services](/services) section first.
You must also provide the `username` of the Twitter user, `number` of latest tweets to monitor and `history' as number of tweets that will be held in memory.
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
Set `starting_at` to the date/time (eg. `Mon Jun 02 00:38:12 +0000 2014`) you want to start receiving tweets from (default: agent's `created_at`)
MD
event_description <<~MD
Events are the raw JSON provided by the [Twitter API v1.1](https://dev.twitter.com/docs/api/1.1/get/favorites/list) with slight modifications. They should look something like this:
#{tweet_event_description('full_text')}
MD
default_schedule "every_1h"
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def default_options
{
'username' => 'tectonic',
'number' => '10',
'history' => '100',
'expected_update_period_in_days' => '2'
}
end
def validate_options
errors.add(:base, "username is required") unless options[:username].present?
errors.add(:base, "number is required") unless options[:number].present?
errors.add(:base, "history is required") unless options[:history].present?
errors.add(
:base,
"expected_update_period_in_days is required"
) unless options[:expected_update_period_in_days].present?
if options[:starting_at].present?
begin
Time.parse(options[:starting_at])
rescue StandardError
errors.add(:base, "Error parsing starting_at")
end
end
end
def starting_at
if interpolated[:starting_at].present?
begin
Time.parse(interpolated[:starting_at])
rescue StandardError
end
end || created_at || Time.now # for dry-running
end
def check
opts = { count: interpolated[:number], tweet_mode: 'extended' }
tweets = twitter.favorites(interpolated[:username], opts)
memory[:last_seen] ||= []
tweets.sort_by(&:id).each do |tweet|
next if memory[:last_seen].include?(tweet.id) || tweet.created_at < starting_at
memory[:last_seen].push(tweet.id)
memory[:last_seen].shift if memory[:last_seen].length > interpolated[:history].to_i
create_event(payload: format_tweet(tweet))
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/website_agent.rb | app/models/agents/website_agent.rb | require 'nokogiri'
require 'date'
module Agents
class WebsiteAgent < Agent
include WebRequestConcern
can_dry_run!
can_order_created_events!
no_bulk_receive!
default_schedule "every_12h"
UNIQUENESS_LOOK_BACK = 200
UNIQUENESS_FACTOR = 3
description <<~MD
The Website Agent scrapes a website, XML document, or JSON feed and creates Events based on the results.
Specify a `url` and select a `mode` for when to create Events based on the scraped data, either `all`, `on_change`, or `merge` (if fetching based on an Event, see below).
The `url` option can be a single url, or an array of urls (for example, for multiple pages with the exact same structure but different content to scrape).
The WebsiteAgent can also scrape based on incoming events.
* Set the `url_from_event` option to a [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) template to generate the url to access based on the Event. (To fetch the url in the Event's `url` key, for example, set `url_from_event` to `{{ url }}`.)
* Alternatively, set `data_from_event` to a [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) template to use data directly without fetching any URL. (For example, set it to `{{ html }}` to use HTML contained in the `html` key of the incoming Event.)
* If you specify `merge` for the `mode` option, Huginn will retain the old payload and update it with new values.
# Supported Document Types
The `type` value can be `xml`, `html`, `json`, or `text`.
To tell the Agent how to parse the content, specify `extract` as a hash with keys naming the extractions and values of hashes.
Note that for all of the formats, whatever you extract MUST have the same number of matches for each extractor except when it has `repeat` set to true. E.g., if you're extracting rows, all extractors must match all rows. For generating CSS selectors, something like [SelectorGadget](http://selectorgadget.com) may be helpful.
For extractors with `hidden` set to true, they will be excluded from the payloads of events created by the Agent, but can be used and interpolated in the `template` option explained below.
For extractors with `repeat` set to true, their first matches will be included in all extracts. This is useful such as when you want to include the title of a page in all events created from the page.
# Scraping HTML and XML
When parsing HTML or XML, these sub-hashes specify how each extraction should be done. The Agent first selects a node set from the document for each extraction key by evaluating either a CSS selector in `css` or an XPath expression in `xpath`. It then evaluates an XPath expression in `value` (default: `.`) on each node in the node set, converting the result into a string. Here's an example:
"extract": {
"url": { "css": "#comic img", "value": "@src" },
"title": { "css": "#comic img", "value": "@title" },
"body_text": { "css": "div.main", "value": "string(.)" },
"page_title": { "css": "title", "value": "string(.)", "repeat": true }
}
or
"extract": {
"url": { "xpath": "//*[@class="blog-item"]/a/@href", "value": "."
"title": { "xpath": "//*[@class="blog-item"]/a", "value": "normalize-space(.)" },
"description": { "xpath": "//*[@class="blog-item"]/div[0]", "value": "string(.)" }
}
"@_attr_" is the XPath expression to extract the value of an attribute named _attr_ from a node (such as "@href" from a hyperlink), and `string(.)` gives a string with all the enclosed text nodes concatenated without entity escaping (such as `&`). To extract the innerHTML, use `./node()`; and to extract the outer HTML, use `.`.
You can also use [XPath functions](https://www.w3.org/TR/xpath/#section-String-Functions) like `normalize-space` to strip and squeeze whitespace, `substring-after` to extract part of a text, and `translate` to remove commas from formatted numbers, etc. Instead of passing `string(.)` to these functions, you can just pass `.` like `normalize-space(.)` and `translate(., ',', '')`.
Beware that when parsing an XML document (i.e. `type` is `xml`) using `xpath` expressions, all namespaces are stripped from the document unless the top-level option `use_namespaces` is set to `true`.
For extraction with `raw` set to true, each value will be returned as is without any conversion instead of stringifying them. This is useful when you want to extract a number, a boolean value, or an array of strings.
For extraction with `single_array` set to true, all matches will be extracted into an array. This is useful when extracting list elements or multiple parts of a website that can only be matched with the same selector.
# Scraping JSON
When parsing JSON, these sub-hashes specify [JSONPaths](http://goessner.net/articles/JsonPath/) to the values that you care about.
Sample incoming event:
{ "results": {
"data": [
{
"title": "Lorem ipsum 1",
"description": "Aliquam pharetra leo ipsum."
"price": 8.95
},
{
"title": "Lorem ipsum 2",
"description": "Suspendisse a pulvinar lacus."
"price": 12.99
},
{
"title": "Lorem ipsum 3",
"description": "Praesent ac arcu tellus."
"price": 8.99
}
]
}
}
Sample rule:
"extract": {
"title": { "path": "results.data[*].title" },
"description": { "path": "results.data[*].description" }
}
In this example the `*` wildcard character makes the parser to iterate through all items of the `data` array. Three events will be created as a result.
Sample outgoing events:
[
{
"title": "Lorem ipsum 1",
"description": "Aliquam pharetra leo ipsum."
},
{
"title": "Lorem ipsum 2",
"description": "Suspendisse a pulvinar lacus."
},
{
"title": "Lorem ipsum 3",
"description": "Praesent ac arcu tellus."
}
]
The `extract` option can be skipped for the JSON type, causing the full JSON response to be returned.
# Scraping Text
When parsing text, each sub-hash should contain a `regexp` and `index`. Output text is matched against the regular expression repeatedly from the beginning through to the end, collecting a captured group specified by `index` in each match. Each index should be either an integer or a string name which corresponds to <code>(?<<em>name</em>>...)</code>. For example, to parse lines of <code><em>word</em>: <em>definition</em></code>, the following should work:
"extract": {
"word": { "regexp": "^(.+?): (.+)$", "index": 1 },
"definition": { "regexp": "^(.+?): (.+)$", "index": 2 }
}
Or if you prefer names to numbers for index:
"extract": {
"word": { "regexp": "^(?<word>.+?): (?<definition>.+)$", "index": "word" },
"definition": { "regexp": "^(?<word>.+?): (?<definition>.+)$", "index": "definition" }
}
To extract the whole content as one event:
"extract": {
"content": { "regexp": "\\A(?m:.)*\\z", "index": 0 }
}
Beware that `.` does not match the newline character (LF) unless the `m` flag is in effect, and `^`/`$` basically match every line beginning/end. See [this document](http://ruby-doc.org/core-#{RUBY_VERSION}/doc/regexp_rdoc.html) to learn the regular expression variant used in this service.
# General Options
Can be configured to use HTTP basic auth by including the `basic_auth` parameter with `"username:password"`, or `["username", "password"]`.
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent. This is only used to set the "working" status.
Set `uniqueness_look_back` to limit the number of events checked for uniqueness (typically for performance). This defaults to the larger of #{UNIQUENESS_LOOK_BACK} or #{UNIQUENESS_FACTOR}x the number of detected received results.
Set `force_encoding` to an encoding name (such as `UTF-8` and `ISO-8859-1`) if the website is known to respond with a missing, invalid, or wrong charset in the Content-Type header. Below are the steps used by Huginn to detect the encoding of fetched content:
1. If `force_encoding` is given, that value is used.
2. If the Content-Type header contains a charset parameter, that value is used.
3. When `type` is `html` or `xml`, Huginn checks for the presence of a BOM, XML declaration with attribute "encoding", or an HTML meta tag with charset information, and uses that if found.
4. Huginn falls back to UTF-8 (not ISO-8859-1).
Set `user_agent` to a custom User-Agent name if the website does not like the default value (`#{default_user_agent}`).
The `headers` field is optional. When present, it should be a hash of headers to send with the request.
Set `disable_ssl_verification` to `true` to disable ssl verification.
Set `unzip` to `gzip` to inflate the resource using gzip.
Set `http_success_codes` to an array of status codes (e.g., `[404, 422]`) to treat HTTP response codes beyond 200 as successes.
If a `template` option is given, its value must be a hash, whose key-value pairs are interpolated after extraction for each iteration and merged with the payload. In the template, keys of extracted data can be interpolated, and some additional variables are also available as explained in the next section. For example:
"template": {
"url": "{{ url | to_uri: _response_.url }}",
"description": "{{ body_text }}",
"last_modified": "{{ _response_.headers.Last-Modified | date: '%FT%T' }}"
}
In the `on_change` mode, change is detected based on the resulted event payload after applying this option. If you want to add some keys to each event but ignore any change in them, set `mode` to `all` and put a DeDuplicationAgent downstream.
# Liquid Templating
In [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) templating, the following variables are available:
* `_url_`: The URL specified to fetch the content from. When parsing `data_from_event`, this is not set.
* `_response_`: A response object with the following keys:
* `status`: HTTP status as integer. (Almost always 200) When parsing `data_from_event`, this is set to the value of the `status` key in the incoming Event, if it is a number or a string convertible to an integer.
* `headers`: Response headers; for example, `{{ _response_.headers.Content-Type }}` expands to the value of the Content-Type header. Keys are insensitive to cases and -/_. When parsing `data_from_event`, this is constructed from the value of the `headers` key in the incoming Event, if it is a hash.
* `url`: The final URL of the fetched page, following redirects. When parsing `data_from_event`, this is set to the value of the `url` key in the incoming Event. Using this in the `template` option, you can resolve relative URLs extracted from a document like `{{ link | to_uri: _response_.url }}` and `{{ content | rebase_hrefs: _response_.url }}`.
# Ordering Events
#{description_events_order}
MD
event_description do
if keys = event_keys
"Events will have the following fields:\n\n %s" % [
Utils.pretty_print(Hash[event_keys.map { |key|
[key, "..."]
}])
]
else
"Events will be the raw JSON returned by the URL."
end
end
def event_keys
extract = options['extract'] or return nil
extract.each_with_object([]) { |(key, value), keys|
keys << key unless boolify(value['hidden'])
} | (options['template'].presence.try!(:keys) || [])
end
def working?
event_created_within?(options['expected_update_period_in_days']) && !recent_error_logs?
end
def default_options
{
'expected_update_period_in_days' => "2",
'url' => "https://xkcd.com",
'type' => "html",
'mode' => "on_change",
'extract' => {
'url' => { 'css' => "#comic img", 'value' => "@src" },
'title' => { 'css' => "#comic img", 'value' => "@alt" },
'hovertext' => { 'css' => "#comic img", 'value' => "@title" }
}
}
end
def validate_options
# Check for required fields
errors.add(:base,
"either url, url_from_event, or data_from_event are required") unless options['url'].present? || options['url_from_event'].present? || options['data_from_event'].present?
errors.add(:base,
"expected_update_period_in_days is required") unless options['expected_update_period_in_days'].present?
validate_extract_options!
validate_template_options!
validate_http_success_codes!
# Check for optional fields
if options['mode'].present?
errors.add(:base, "mode must be set to on_change, all or merge") unless %w[on_change all
merge].include?(options['mode'])
end
if options['expected_update_period_in_days'].present?
errors.add(:base,
"Invalid expected_update_period_in_days format") unless is_positive_integer?(options['expected_update_period_in_days'])
end
if options['uniqueness_look_back'].present?
errors.add(:base,
"Invalid uniqueness_look_back format") unless is_positive_integer?(options['uniqueness_look_back'])
end
validate_web_request_options!
end
def validate_http_success_codes!
consider_success = options["http_success_codes"]
if consider_success.present?
if consider_success.class != Array
errors.add(:http_success_codes, "must be an array and specify at least one status code")
elsif consider_success.uniq.count != consider_success.count
errors.add(:http_success_codes, "duplicate http code found")
elsif consider_success.any? { |e| e.to_s !~ /^\d+$/ }
errors.add(:http_success_codes,
"please make sure to use only numeric values for code, ex 404, or \"404\"")
end
end
end
def validate_extract_options!
extraction_type = begin
extraction_type()
rescue StandardError
extraction_type(options)
end
case extract = options['extract']
when Hash
if extract.each_value.any? { |value| !value.is_a?(Hash) }
errors.add(:base, 'extract must be a hash of hashes.')
else
case extraction_type
when 'html', 'xml'
extract.each do |name, details|
details.each do |name,|
case name
when 'css', 'xpath', 'value', 'repeat', 'hidden', 'raw', 'single_array'
# ok
else
errors.add(:base, "Unknown key #{name.inspect} in extraction details")
end
end
case details['css']
when String
# ok
when nil
case details['xpath']
when String
# ok
when nil
errors.add(:base,
"When type is html or xml, all extractions must have a css or xpath attribute (bad extraction details for #{name.inspect})")
else
errors.add(:base, "Wrong type of \"xpath\" value in extraction details for #{name.inspect}")
end
else
errors.add(:base, "Wrong type of \"css\" value in extraction details for #{name.inspect}")
end
case details['value']
when String, nil
# ok
else
errors.add(:base, "Wrong type of \"value\" value in extraction details for #{name.inspect}")
end
end
when 'json'
extract.each do |name, details|
case details['path']
when String
# ok
when nil
errors.add(:base,
"When type is json, all extractions must have a path attribute (bad extraction details for #{name.inspect})")
else
errors.add(:base, "Wrong type of \"path\" value in extraction details for #{name.inspect}")
end
end
when 'text'
extract.each do |name, details|
case regexp = details['regexp']
when String
begin
re = Regexp.new(regexp)
rescue StandardError => e
errors.add(:base, "invalid regexp for #{name.inspect}: #{e.message}")
end
when nil
errors.add(:base,
"When type is text, all extractions must have a regexp attribute (bad extraction details for #{name.inspect})")
else
errors.add(:base, "Wrong type of \"regexp\" value in extraction details for #{name.inspect}")
end
case index = details['index']
when Integer, /\A\d+\z/
# ok
when String
if re && !re.names.include?(index)
errors.add(:base, "no named capture #{index.inspect} found in regexp for #{name.inspect})")
end
when nil
errors.add(:base,
"When type is text, all extractions must have an index attribute (bad extraction details for #{name.inspect})")
else
errors.add(:base, "Wrong type of \"index\" value in extraction details for #{name.inspect}")
end
end
when /\{/
# Liquid templating
else
errors.add(:base, "Unknown extraction type #{extraction_type.inspect}")
end
end
when nil
unless extraction_type == 'json'
errors.add(:base, 'extract is required for all types except json')
end
else
errors.add(:base, 'extract must be a hash')
end
end
def validate_template_options!
template = options['template'].presence or return
unless Hash === template && template.each_key.all?(String)
errors.add(:base, 'template must be a hash of strings.')
end
end
def check
check_urls(interpolated['url'])
end
def check_urls(in_url, existing_payload = {})
return unless in_url.present?
Array(in_url).each do |url|
check_url(url, existing_payload)
end
end
def check_url(url, existing_payload = {})
unless /\Ahttps?:\/\//i === url
error "Ignoring a non-HTTP url: #{url.inspect}"
return
end
uri = Utils.normalize_uri(url)
log "Fetching #{uri}"
response = faraday.get(uri)
raise "Failed: #{response.inspect}" unless consider_response_successful?(response)
interpolation_context.stack {
interpolation_context['_url_'] = uri.to_s
interpolation_context['_response_'] = ResponseDrop.new(response)
handle_data(response.body, response.env[:url], existing_payload)
}
rescue StandardError => e
error "Error when fetching url: #{e.message}\n#{e.backtrace.join("\n")}"
end
def default_encoding
case extraction_type
when 'html', 'xml'
# Let Nokogiri detect the encoding
nil
else
super
end
end
def handle_data(body, url, existing_payload)
# Beware, url may be a URI object, string or nil
doc = parse(body)
if extract_full_json?
if store_payload!(previous_payloads(1), doc)
log "Storing new result for '#{name}': #{doc.inspect}"
create_event payload: existing_payload.merge(doc)
end
return
end
output =
case extraction_type
when 'json'
extract_json(doc)
when 'text'
extract_text(doc)
else
extract_xml(doc)
end
num_tuples = output.size or
raise "At least one non-repeat key is required"
old_events = previous_payloads num_tuples
template = options['template'].presence
output.each do |extracted|
result = extracted.except(*output.hidden_keys)
if template
result.update(interpolate_options(template, extracted))
end
if store_payload!(old_events, result)
log "Storing new parsed result for '#{name}': #{result.inspect}"
create_event payload: existing_payload.merge(result)
end
end
end
def receive(incoming_events)
interpolate_with_each(incoming_events) do |event|
existing_payload = interpolated['mode'].to_s == "merge" ? event.payload : {}
if data_from_event = options['data_from_event'].presence
data = interpolate_options(data_from_event)
if data.present?
handle_event_data(data, event, existing_payload)
else
error "No data was found in the Event payload using the template #{data_from_event}", inbound_event: event
end
else
url_to_scrape =
if url_template = options['url_from_event'].presence
interpolate_options(url_template)
else
interpolated['url']
end
check_urls(url_to_scrape, existing_payload)
end
end
end
private
def consider_response_successful?(response)
response.success? || begin
consider_success = options["http_success_codes"]
consider_success.present? && (consider_success.include?(response.status.to_s) || consider_success.include?(response.status))
end
end
def handle_event_data(data, event, existing_payload)
interpolation_context.stack {
interpolation_context['_response_'] = ResponseFromEventDrop.new(event)
handle_data(data, event.payload['url'].presence, existing_payload)
}
rescue StandardError => e
error "Error when handling event data: #{e.message}\n#{e.backtrace.join("\n")}"
end
# This method returns true if the result should be stored as a new event.
# If mode is set to 'on_change', this method may return false and update an existing
# event to expire further in the future.
def store_payload!(old_events, result)
case interpolated['mode'].presence
when 'on_change'
result_json = result.to_json
if found = old_events.find { |event| event.payload.to_json == result_json }
found.update!(expires_at: new_event_expiration_date)
false
else
true
end
when 'all', 'merge', ''
true
else
raise "Illegal options[mode]: #{interpolated['mode']}"
end
end
def previous_payloads(num_events)
if interpolated['uniqueness_look_back'].present?
look_back = interpolated['uniqueness_look_back'].to_i
else
# Larger of UNIQUENESS_FACTOR * num_events and UNIQUENESS_LOOK_BACK
look_back = UNIQUENESS_FACTOR * num_events
if look_back < UNIQUENESS_LOOK_BACK
look_back = UNIQUENESS_LOOK_BACK
end
end
events.order("id desc").limit(look_back) if interpolated['mode'] == "on_change"
end
def extract_full_json?
!interpolated['extract'].present? && extraction_type == "json"
end
def extraction_type(interpolated = interpolated())
(interpolated['type'] || begin
case interpolated['url']
when /\.(rss|xml)$/i
"xml"
when /\.json$/i
"json"
when /\.(txt|text)$/i
"text"
else
"html"
end
end).to_s
end
def use_namespaces?
if interpolated.key?('use_namespaces')
boolify(interpolated['use_namespaces'])
else
interpolated['extract'].none? { |_name, extraction_details|
extraction_details.key?('xpath')
}
end
end
def extract_each(&block)
interpolated['extract'].each_with_object(Output.new) { |(name, extraction_details), output|
if boolify(extraction_details['repeat'])
values = Repeater.new { |repeater|
block.call(extraction_details, repeater)
}
else
values = []
block.call(extraction_details, values)
end
log "Values extracted: #{values}"
begin
output[name] = values
rescue UnevenSizeError
raise "Got an uneven number of matches for #{interpolated['name']}: #{interpolated['extract'].inspect}"
else
output.hidden_keys << name if boolify(extraction_details['hidden'])
end
}
end
def extract_json(doc)
extract_each { |extraction_details, values|
log "Extracting #{extraction_type} at #{extraction_details['path']}"
Utils.values_at(doc, extraction_details['path']).each { |value|
values << value
}
}
end
def extract_text(doc)
extract_each { |extraction_details, values|
regexp = Regexp.new(extraction_details['regexp'])
log "Extracting #{extraction_type} with #{regexp}"
case index = extraction_details['index']
when /\A\d+\z/
index = index.to_i
end
doc.scan(regexp) {
values << Regexp.last_match[index]
}
}
end
def extract_xml(doc)
extract_each { |extraction_details, values|
case
when css = extraction_details['css']
nodes = doc.css(css)
when xpath = extraction_details['xpath']
nodes = doc.xpath(xpath)
else
raise '"css" or "xpath" is required for HTML or XML extraction'
end
log "Extracting #{extraction_type} at #{xpath || css}"
expr = extraction_details['value'] || '.'
handle_float = ->(value) {
case
when value.nan?
'NaN'
when value.infinite?
if value > 0
'Infinity'
else
'-Infinity'
end
when value.to_i == value
# Node#xpath() returns any numeric value as float;
# convert it to integer as appropriate.
value.to_i
else
value
end
}
jsonify =
if boolify(extraction_details['raw'])
->(value) {
case value
when nil, true, false, String, Integer
value
when Float
handle_float.call(value)
when Nokogiri::XML::NodeSet
value.map(&jsonify)
else
value.to_s
end
}
else
->(value) {
case value
when Float
handle_float.call(value).to_s
else
value.to_s
end
}
end
case nodes
when Nokogiri::XML::NodeSet
node_values = nodes.map { |node|
jsonify.call(node.xpath(expr))
}
if boolify(extraction_details['single_array'])
values << node_values
else
node_values.each { |value| values << value }
end
else
raise "The result of HTML/XML extraction was not a NodeSet"
end
}
end
def parse(data)
case type = extraction_type
when "xml"
doc = Nokogiri::XML(data)
# ignore xmlns, useful when parsing atom feeds
doc.remove_namespaces! unless use_namespaces?
doc
when "json"
JSON.parse(data)
when "html"
Nokogiri::HTML(data)
when "text"
data
else
raise "Unknown extraction type: #{type}"
end
end
class UnevenSizeError < ArgumentError
end
class Output
def initialize
@hash = {}
@size = nil
@hidden_keys = []
end
attr_reader :size
attr_reader :hidden_keys
def []=(key, value)
case size = value.size
when Integer
if @size && @size != size
raise UnevenSizeError, 'got an uneven size'
end
@size = size
end
@hash[key] = value
end
def each
@size.times.zip(*@hash.values) do |_index, *values|
yield @hash.each_key.lazy.zip(values).to_h
end
end
end
class Repeater < Enumerator
# Repeater.new { |y|
# # ...
# y << value
# } #=> [value, ...]
def initialize(&block)
@value = nil
super(Float::INFINITY) { |y|
loop { y << @value }
}
catch(@done = Object.new) {
block.call(self)
}
end
def <<(value)
@value = value
throw @done
end
def to_s
"[#{@value.inspect}, ...]"
end
end
# Wraps Faraday::Response
class ResponseDrop < LiquidDroppable::Drop
def headers
HeaderDrop.new(@object.headers)
end
# Integer value of HTTP status
def status
@object.status
end
# The URL
def url
@object.env.url.to_s
end
end
class ResponseFromEventDrop < LiquidDroppable::Drop
def headers
headers = begin
Faraday::Utils::Headers.from(@object.payload[:headers])
rescue StandardError
{}
end
HeaderDrop.new(headers)
end
# Integer value of HTTP status
def status
Integer(@object.payload[:status])
rescue StandardError
nil
end
# The URL
def url
@object.payload[:url]
end
end
# Wraps Faraday::Utils::Headers
class HeaderDrop < LiquidDroppable::Drop
def liquid_method_missing(name)
@object[name.tr('_', '-')]
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/evernote_agent.rb | app/models/agents/evernote_agent.rb | module Agents
class EvernoteAgent < Agent
include EvernoteConcern
description <<~MD
The Evernote Agent connects with a user's Evernote note store.
Visit [Evernote](https://dev.evernote.com/doc/) to set up an Evernote app and receive an api key and secret.
Store these in the Evernote environment variables in the .env file.
You will also need to create a [Sandbox](https://sandbox.evernote.com/Registration.action) account to use during development.
Next, you'll need to authenticate with Evernote in the [Services](/services) section.
Options:
* `mode` - Two possible values:
- `update` Based on events it receives, the agent will create notes
or update notes with the same `title` and `notebook`
- `read` On a schedule, it will generate events containing data for newly
added or updated notes
* `include_xhtml_content` - Set to `true` to include the content in ENML (Evernote Markup Language) of the note
* `note`
- When `mode` is `update` the parameters of `note` are the attributes of the note to be added/edited.
To edit a note, both `title` and `notebook` must be set.
For example, to add the tags 'comic' and 'CS' to a note titled 'xkcd Survey' in the notebook 'xkcd', use:
"notes": {
"title": "xkcd Survey",
"content": "",
"notebook": "xkcd",
"tagNames": "comic, CS"
}
If a note with the above title and notebook did note exist already, one would be created.
- When `mode` is `read` the values are search parameters.
Note: The `content` parameter is not used for searching. Setting `title` only filters
notes whose titles contain `title` as a substring, not as the exact title.
For example, to find all notes with tag 'CS' in the notebook 'xkcd', use:
"notes": {
"title": "",
"content": "",
"notebook": "xkcd",
"tagNames": "CS"
}
MD
event_description <<~MD
When `mode` is `update`, events look like:
{
"title": "...",
"content": "...",
"notebook": "...",
"tags": "...",
"source": "...",
"sourceURL": "..."
}
When `mode` is `read`, events look like:
{
"title": "...",
"content": "...",
"notebook": "...",
"tags": "...",
"source": "...",
"sourceURL": "...",
"resources" : [
{
"url": "resource1_url",
"name": "resource1_name",
"mime_type": "resource1_mime_type"
}
...
]
}
MD
default_schedule "never"
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def default_options
{
"expected_update_period_in_days" => "2",
"mode" => "update",
"include_xhtml_content" => "false",
"note" => {
"title" => "{{title}}",
"content" => "{{content}}",
"notebook" => "{{notebook}}",
"tagNames" => "{{tag1}}, {{tag2}}"
}
}
end
def validate_options
errors.add(:base, "mode must be 'update' or 'read'") unless %w[read update].include?(options[:mode])
if options[:mode] == "update" && schedule != "never"
errors.add(:base, "when mode is set to 'update', schedule must be 'never'")
end
if options[:mode] == "read" && schedule == "never"
errors.add(:base, "when mode is set to 'read', agent must have a schedule")
end
errors.add(:base,
"expected_update_period_in_days is required") unless options['expected_update_period_in_days'].present?
if options[:mode] == "update" && options[:note].values.all?(&:empty?)
errors.add(:base, "you must specify at least one note parameter to create or update a note")
end
end
def include_xhtml_content?
options[:include_xhtml_content] == "true"
end
def receive(incoming_events)
if options[:mode] == "update"
incoming_events.each do |event|
note = note_store.create_or_update_note(note_params(event))
create_event payload: note.attr(include_content: include_xhtml_content?)
end
end
end
def check
if options[:mode] == "read"
opts = note_params(options)
# convert time to evernote timestamp format:
# https://dev.evernote.com/doc/reference/Types.html#Typedef_Timestamp
opts.merge!(agent_created_at: created_at.to_i * 1000)
opts.merge!(last_checked_at: (memory[:last_checked_at] ||= created_at.to_i * 1000))
if opts[:tagNames]
notes_with_tags =
memory[:notes_with_tags] ||=
NoteStore::Search.new(note_store, { tagNames: opts[:tagNames] }).note_guids
opts.merge!(notes_with_tags:)
end
notes = NoteStore::Search.new(note_store, opts).notes
notes.each do |note|
memory[:notes_with_tags] << note.guid unless memory[:notes_with_tags].include?(note.guid)
create_event payload: note.attr(include_resources: true, include_content: include_xhtml_content?)
end
memory[:last_checked_at] = Time.now.to_i * 1000
end
end
private
def note_params(options)
params = interpolated(options)[:note]
errors.add(:base, "only one notebook allowed") unless params[:notebook].to_s.split(/\s*,\s*/) == 1
params[:tagNames] = params[:tagNames].to_s.split(/\s*,\s*/)
params[:title].strip!
params[:notebook].strip!
params
end
def evernote_note_store
evernote_client.note_store
end
def note_store
@note_store ||= NoteStore.new(evernote_note_store)
end
# wrapper for evernote api NoteStore
# https://dev.evernote.com/doc/reference/
class NoteStore
attr_reader :en_note_store
delegate :createNote, :updateNote, :getNote, :listNotebooks, :listTags, :getNotebook,
:createNotebook, :findNotesMetadata, :getNoteTagNames, to: :en_note_store
def initialize(en_note_store)
@en_note_store = en_note_store
end
def create_or_update_note(params)
search = Search.new(self, { title: params[:title], notebook: params[:notebook] })
# evernote search can only filter notes with titles containing a substring;
# this finds a note with the exact title
note = search.notes.detect { |note| note.title == params[:title] }
if note
# a note with specified title and notebook exists, so update it
update_note(params.merge(guid: note.guid, notebookGuid: note.notebookGuid))
else
# create the notebook unless it already exists
notebook = find_notebook(name: params[:notebook])
notebook_guid =
notebook ? notebook.guid : create_notebook(params[:notebook]).guid
create_note(params.merge(notebookGuid: notebook_guid))
end
end
def create_note(params)
note = Evernote::EDAM::Type::Note.new(with_wrapped_content(params))
en_note = createNote(note)
find_note(en_note.guid)
end
def update_note(params)
# do not empty note properties that have not been set in `params`
params.keys.each { |key| params.delete(key) unless params[key].present? }
params = with_wrapped_content(params)
# append specified tags instead of replacing current tags
# evernote will create any new tags
tags = getNoteTagNames(params[:guid])
tags.each { |tag|
params[:tagNames] << tag unless params[:tagNames].include?(tag)
}
note = Evernote::EDAM::Type::Note.new(params)
updateNote(note)
find_note(params[:guid])
end
def find_note(guid)
# https://dev.evernote.com/doc/reference/NoteStore.html#Fn_NoteStore_getNote
en_note = getNote(guid, true, false, false, false)
build_note(en_note)
end
def build_note(en_note)
notebook = find_notebook(guid: en_note.notebookGuid).try(:name)
tags = en_note.tagNames || find_tags(en_note.tagGuids.to_a).map(&:name)
Note.new(en_note, notebook, tags)
end
def find_tags(guids)
listTags.select { |tag| guids.include?(tag.guid) }
end
def find_notebook(params)
if params[:guid]
listNotebooks.detect { |notebook| notebook.guid == params[:guid] }
elsif params[:name]
listNotebooks.detect { |notebook| notebook.name == params[:name] }
end
end
def create_notebook(name)
notebook = Evernote::EDAM::Type::Notebook.new(name:)
createNotebook(notebook)
end
def with_wrapped_content(params)
params.delete(:notebook)
if params[:content]
params[:content] =
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>" \
"<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">" \
"<en-note>#{params[:content].encode(xml: :text)}</en-note>"
end
params
end
class Search
attr_reader :note_store, :opts
def initialize(note_store, opts)
@note_store = note_store
@opts = opts
end
def note_guids
filtered_metadata.map(&:guid)
end
def notes
metadata = filtered_metadata
if opts[:last_checked_at] && opts[:tagNames]
# evernote does note change Note#updated timestamp when a tag is added to a note
# the following selects recently updated notes
# and notes that recently had the specified tags added
metadata.select! do |note_data|
note_data.updated > opts[:last_checked_at] ||
!opts[:notes_with_tags].include?(note_data.guid)
end
elsif opts[:last_checked_at]
metadata.select! { |note_data| note_data.updated > opts[:last_checked_at] }
end
metadata.map! { |note_data| note_store.find_note(note_data.guid) }
metadata
end
def create_filter
filter = Evernote::EDAM::NoteStore::NoteFilter.new
# evernote search grammar:
# https://dev.evernote.com/doc/articles/search_grammar.php#Search_Terms
query_terms = []
query_terms << "notebook:\"#{opts[:notebook]}\"" if opts[:notebook].present?
query_terms << "intitle:\"#{opts[:title]}\"" if opts[:title].present?
query_terms << "updated:day-1" if opts[:last_checked_at].present?
opts[:tagNames].to_a.each { |tag| query_terms << "tag:#{tag}" }
filter.words = query_terms.join(" ")
filter
end
private
def filtered_metadata
filter = create_filter
spec = create_spec
metadata = note_store.findNotesMetadata(filter, 0, 100, spec).notes
end
def create_spec
Evernote::EDAM::NoteStore::NotesMetadataResultSpec.new(
includeTitle: true,
includeAttributes: true,
includeNotebookGuid: true,
includeTagGuids: true,
includeUpdated: true,
includeCreated: true
)
end
end
end
class Note
attr_accessor :en_note
attr_reader :notebook, :tags
delegate :guid, :notebookGuid, :title, :tagGuids, :content, :resources,
:attributes, to: :en_note
def initialize(en_note, notebook, tags)
@en_note = en_note
@notebook = notebook
@tags = tags
end
def attr(opts = {})
return_attr = {
title:,
notebook:,
tags:,
source: attributes.source,
source_url: attributes.sourceURL
}
return_attr[:content] = content if opts[:include_content]
if opts[:include_resources] && resources
return_attr[:resources] = []
resources.each do |resource|
return_attr[:resources] << {
url: resource.attributes.sourceURL,
name: resource.attributes.fileName,
mime_type: resource.mime
}
end
end
return_attr
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/de_duplication_agent.rb | app/models/agents/de_duplication_agent.rb | module Agents
class DeDuplicationAgent < Agent
include FormConfigurable
cannot_be_scheduled!
description <<~MD
The De-duplication Agent receives a stream of events and remits the event if it is not a duplicate.
`property` the value that should be used to determine the uniqueness of the event (empty to use the whole payload)
`lookback` amount of past Events to compare the value to (0 for unlimited)
`expected_update_period_in_days` is used to determine if the Agent is working.
MD
event_description <<~MD
The DeDuplicationAgent just reemits events it received.
MD
def default_options
{
'property' => '{{value}}',
'lookback' => 100,
'expected_update_period_in_days' => 1
}
end
form_configurable :property
form_configurable :lookback
form_configurable :expected_update_period_in_days
after_initialize :initialize_memory
def initialize_memory
memory['properties'] ||= []
end
def validate_options
unless options['lookback'].present? && options['expected_update_period_in_days'].present?
errors.add(:base, "The lookback and expected_update_period_in_days fields are all required.")
end
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def receive(incoming_events)
incoming_events.each do |event|
handle(interpolated(event), event)
end
end
private
def handle(opts, event = nil)
property = get_hash(options['property'].blank? ? JSON.dump(event.payload) : opts['property'])
if is_unique?(property)
outbound_event = create_event payload: event.payload
log(
"Propagating new event as '#{property}' is a new unique property.",
inbound_event: event,
outbound_event:
)
update_memory(property, opts['lookback'].to_i)
else
log("Not propagating as incoming event is a duplicate.", inbound_event: event)
end
end
def get_hash(property)
if property.to_s.length > 10
Zlib.crc32(property).to_s
else
property
end
end
def is_unique?(property)
!memory['properties'].include?(property)
end
def update_memory(property, amount)
if amount != 0 && memory['properties'].length == amount
memory['properties'].shift
end
memory['properties'].push(property)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/dropbox_file_url_agent.rb | app/models/agents/dropbox_file_url_agent.rb | module Agents
class DropboxFileUrlAgent < Agent
include DropboxConcern
cannot_be_scheduled!
no_bulk_receive!
can_dry_run!
description <<~MD
The _DropboxFileUrlAgent_ is used to work with Dropbox. It takes a file path (or multiple files paths) and emits events with either [temporary links](https://www.dropbox.com/developers/core/docs#media) or [permanent links](https://www.dropbox.com/developers/core/docs#shares).
#{'## Include the `dropbox-api` and `omniauth-dropbox` gems in your `Gemfile` and set `DROPBOX_OAUTH_KEY` and `DROPBOX_OAUTH_SECRET` in your environment to use Dropbox Agents.' if dependencies_missing?}
The incoming event payload needs to have a `paths` key, with a comma-separated list of files you want the URL for. For example:
{
"paths": "first/path, second/path"
}
__TIP__: You can use the _Event Formatting Agent_ to format events before they come in. Here's an example configuration for formatting an event coming out of a _Dropbox Watch Agent_:
{
"instructions": {
"paths": "{{ added | map: 'path' | join: ',' }}"
},
"matchers": [],
"mode": "clean"
}
An example of usage would be to watch a specific Dropbox directory (with the _DropboxWatchAgent_) and get the URLs for the added or updated files. You could then, for example, send emails with those links.
Set `link_type` to `'temporary'` if you want temporary links, or to `'permanent'` for permanent ones.
MD
event_description do
"Events will looks like this:\n\n " +
Utils.pretty_print(
if options['link_type'] == 'permanent'
{
url: "https://www.dropbox.com/s/abcde3/example?dl=1",
".tag": "file",
id: "id:abcde3",
name: "hi",
path_lower: "/huginn/hi",
link_permissions: {
resolved_visibility: { ".tag": "public" },
requested_visibility: { ".tag": "public" },
can_revoke: true
},
client_modified: "2017-10-14T18:38:39Z",
server_modified: "2017-10-14T18:38:45Z",
rev: "31db0615354b",
size: 0
}
else
{
url: "https://dl.dropboxusercontent.com/apitl/1/somelongurl",
metadata: {
name: "hi",
path_lower: "/huginn/hi",
path_display: "/huginn/hi",
id: "id:abcde3",
client_modified: "2017-10-14T18:38:39Z",
server_modified: "2017-10-14T18:38:45Z",
rev: "31db0615354b",
size: 0,
content_hash: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}
}
end
)
end
def default_options
{
'link_type' => 'temporary'
}
end
def working?
!recent_error_logs?
end
def receive(events)
events.flat_map { |e| e.payload['paths'].split(',').map(&:strip) }
.each do |path|
create_event payload: (options['link_type'] == 'permanent' ? permanent_url_for(path) : temporary_url_for(path))
end
end
private
def temporary_url_for(path)
dropbox.find(path).direct_url.response.tap do |response|
response['url'] = response.delete('link')
end
end
def permanent_url_for(path)
dropbox.find(path).share_url.response.tap do |response|
response['url'].gsub!('?dl=0', '?dl=1')
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/pdf_info_agent.rb | app/models/agents/pdf_info_agent.rb | require 'open-uri'
require 'hypdf'
module Agents
class PdfInfoAgent < Agent
gem_dependency_check { defined?(HyPDF) }
cannot_be_scheduled!
no_bulk_receive!
description <<~MD
The PDF Info Agent returns the metadata contained within a given PDF file, using HyPDF.
#{'## Include the `hypdf` gem in your `Gemfile` to use PDFInfo Agents.' if dependencies_missing?}
In order for this agent to work, you need to have [HyPDF](https://devcenter.heroku.com/articles/hypdf) running and configured.
It works by acting on events that contain a key `url` in their payload, and runs the [pdfinfo](https://devcenter.heroku.com/articles/hypdf#pdfinfo) command on them.
MD
event_description do
"This will change based on the metadata in the pdf.\n\n " +
Utils.pretty_print({
"Title" => "Everyday Rails Testing with RSpec",
"Author" => "Aaron Sumner",
"Creator" => "LaTeX with hyperref package",
"Producer" => "xdvipdfmx (0.7.8)",
"CreationDate" => "Fri Aug 2 05",
"32" => "50 2013",
"Tagged" => "no",
"Pages" => "150",
"Encrypted" => "no",
"Page size" => "612 x 792 pts (letter)",
"Optimized" => "no",
"PDF version" => "1.5",
"url": "your url"
})
end
def working?
!recent_error_logs?
end
def default_options
{}
end
def receive(incoming_events)
incoming_events.each do |event|
interpolate_with(event) do
url_to_scrape = event.payload['url']
check_url(url_to_scrape, event.payload) if url_to_scrape =~ /^https?:\/\//i
end
end
end
def check_url(in_url, payload)
return unless in_url.present?
Array(in_url).each do |url|
log "Fetching #{url}"
info = HyPDF.pdfinfo(open(url))
create_event payload: info.merge(payload)
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/twitter_stream_agent.rb | app/models/agents/twitter_stream_agent.rb | module Agents
class TwitterStreamAgent < Agent
include TwitterConcern
include LongRunnable
cannot_receive_events!
description <<~MD
The Twitter Stream Agent follows the Twitter stream in real time, watching for certain keywords, or filters, that you provide.
#{twitter_dependencies_missing if dependencies_missing?}
To follow the Twitter stream, provide an array of `filters`. Multiple words in a filter must all show up in a tweet, but are independent of order.
If you provide an array instead of a filter, the first entry will be considered primary and any additional values will be treated as aliases.
To be able to use this Agent you need to authenticate with Twitter in the [Services](/services) section first.
Set `include_retweets` to `true` to not include retweets (default: `false`)
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
`generate` should be either `events` or `counts`. If set to `counts`, it will output event summaries whenever the Agent is scheduled.
MD
event_description <<~MD
When in `counts` mode, TwitterStreamAgent events look like:
{
"filter": "hello world",
"count": 25,
"time": 3456785456
}
When in `events` mode, TwitterStreamAgent events look like:
#{
tweet_event_description('text', <<~MD)
"filter": "selectorgadget",
MD
}
MD
default_schedule "11pm"
def validate_options
unless options[:filters].present? &&
options[:expected_update_period_in_days].present? &&
options[:generate].present?
errors.add(:base, "expected_update_period_in_days, generate, and filters are required fields")
end
if options[:include_retweets].present? && boolify(options[:include_retweets]).nil?
errors.add(:base, "include_retweets must be a boolean value")
end
end
def working?
event_created_within?(interpolated[:expected_update_period_in_days]) && !recent_error_logs?
end
def default_options
{
'filters' => %w[keyword1 keyword2],
'include_retweets' => false,
'expected_update_period_in_days' => "2",
'generate' => "events"
}
end
def process_tweet(filter, status)
filter = lookup_filter(filter)
if filter
if interpolated[:generate] == "counts"
# Avoid memory pollution by reloading the Agent.
agent = Agent.find(id)
agent.memory[:filter_counts] ||= {}
agent.memory[:filter_counts][filter] ||= 0
agent.memory[:filter_counts][filter] += 1
remove_unused_keys!(agent, 'filter_counts')
agent.save!
else
create_event payload: status.merge('filter' => filter)
end
end
end
def check
if interpolated[:generate] == "counts" && memory[:filter_counts].present?
memory[:filter_counts].each do |filter, count|
create_event payload: { 'filter' => filter, 'count' => count, 'time' => Time.now.to_i }
end
end
memory[:filter_counts] = {}
end
protected
def lookup_filter(filter)
interpolated[:filters].each do |known_filter|
if known_filter == filter
return filter
elsif known_filter.is_a?(Array)
if known_filter.include?(filter)
return known_filter.first
end
end
end
end
def remove_unused_keys!(agent, base)
if agent.memory[base]
(
agent.memory[base].keys - agent.interpolated[:filters].map { |f|
f.is_a?(Array) ? f.first.to_s : f.to_s
}
).each do |removed_key|
agent.memory[base].delete(removed_key)
end
end
end
def self.setup_worker
Agents::TwitterStreamAgent.active.order(:id).group_by { |agent|
agent.twitter_oauth_token
}.map do |oauth_token, agents|
if Agents::TwitterStreamAgent.dependencies_missing?
warn Agents::TwitterStreamAgent.twitter_dependencies_missing
STDERR.flush
return false
end
filter_to_agent_map =
agents.map { |agent|
agent.options[:filters]
}.flatten.uniq.compact.map(&:strip).each_with_object({}) { |f, m|
m[f] = []
}
agents.each do |agent|
agent.options[:filters].flatten.uniq.compact.map(&:strip).each do |filter|
filter_to_agent_map[filter] << agent
end
end
config_hash = filter_to_agent_map.map { |k, v| [k, v.map(&:id)] }
config_hash.push(oauth_token)
Worker.new(id: agents.first.worker_id(config_hash),
config: { filter_to_agent_map: },
agent: agents.first)
end
end
class Worker < LongRunnable::Worker
RELOAD_TIMEOUT = 60.minutes
DUPLICATE_DETECTION_LENGTH = 1000
SEPARATOR = /[^\w-]+/
def setup
require 'twitter/json_stream'
@filter_to_agent_map = @config[:filter_to_agent_map]
end
def run
@recent_tweets = []
EventMachine.run do
EventMachine.add_periodic_timer(RELOAD_TIMEOUT) do
restart!
end
stream!(@filter_to_agent_map.keys, @agent) do |status|
handle_status(status)
end
end
Thread.stop
end
def stop
EventMachine.stop_event_loop if EventMachine.reactor_running?
terminate_thread!
end
private
def stream!(filters, agent, &block)
track = filters.map(&:downcase).uniq.join(",")
path =
if track.present?
"/1.1/statuses/filter.json?#{{ track: }.to_param}"
else
"/1.1/statuses/sample.json"
end
stream = Twitter::JSONStream.connect(
path:,
ssl: true,
oauth: {
consumer_key: agent.twitter_consumer_key,
consumer_secret: agent.twitter_consumer_secret,
access_key: agent.twitter_oauth_token,
access_secret: agent.twitter_oauth_token_secret
}
)
stream.each_item(&block)
stream.on_error do |message|
warn " --> Twitter error: #{message} at #{Time.now} <--"
warn " --> Sleeping for 15 seconds"
sleep 15
restart!
end
stream.on_no_data do |_message|
warn " --> Got no data for awhile; trying to reconnect at #{Time.now} <--"
restart!
end
stream.on_max_reconnects do |_timeout, _retries|
warn " --> Oops, tried too many times! at #{Time.now} <--"
sleep 60
restart!
end
end
def handle_status(status)
status = JSON.parse(status, symbolize_names: true) if status.is_a?(String)
status = TwitterConcern.format_tweet(status)
return unless status && status[:text] && !status.has_key?(:delete)
if status[:retweeted_status] && !boolify(agent.options[:include_retweets])
return
elsif @recent_tweets.include?(status[:id_str])
puts "(#{Time.now}) Skipping duplicate tweet: #{status[:text]}"
return
end
@recent_tweets << status[:id_str]
@recent_tweets.shift if @recent_tweets.length > DUPLICATE_DETECTION_LENGTH
@filter_to_agent_map.keys.each do |filter|
next unless (filter.downcase.split(SEPARATOR) - status[:text].downcase.split(SEPARATOR)).reject(&:empty?) == [] # Hacky McHackerson
@filter_to_agent_map[filter].each do |agent|
puts "(#{Time.now}) #{agent.name} received: #{status[:text]}"
AgentRunner.with_connection do
agent.process_tweet(filter, status)
end
end
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/aftership_agent.rb | app/models/agents/aftership_agent.rb | require 'uri'
module Agents
class AftershipAgent < Agent
cannot_receive_events!
default_schedule "every_10m"
description <<~MD
The Aftership agent allows you to track your shipment from aftership and emit them into events.
To be able to use the Aftership API, you need to generate an `API Key`. You need a paying plan to use their tracking feature.
You can use this agent to retrieve tracking data.
Provide the `path` for the API endpoint that you'd like to hit. For example, for all active packages, enter `trackings`
(see https://www.aftership.com/docs/api/4/trackings), for a specific package, use `trackings/SLUG/TRACKING_NUMBER`
and replace `SLUG` with a courier code and `TRACKING_NUMBER` with the tracking number. You can request last checkpoint of a package
by providing `last_checkpoint/SLUG/TRACKING_NUMBER` instead.
You can get a list of courier information here `https://www.aftership.com/courier`
Required Options:
* `api_key` - YOUR_API_KEY.
* `path request and its full path`
MD
event_description <<~MD
A typical tracking event have 2 important objects (tracking, and checkpoint) and the tracking/checkpoint looks like this.
"trackings": [
{
"id": "53aa7b5c415a670000000021",
"created_at": "2014-06-25T07:33:48+00:00",
"updated_at": "2014-06-25T07:33:55+00:00",
"tracking_number": "123456789",
"tracking_account_number": null,
"tracking_postal_code": null,
"tracking_ship_date": null,
"slug": "dhl",
"active": false,
"custom_fields": {
"product_price": "USD19.99",
"product_name": "iPhone Case"
},
"customer_name": null,
"destination_country_iso3": null,
"emails": [
"email@yourdomain.com",
"another_email@yourdomain.com"
],
"expected_delivery": null,
"note": null,
"order_id": "ID 1234",
"order_id_path": "http://www.aftership.com/order_id=1234",
"origin_country_iso3": null,
"shipment_package_count": 0,
"shipment_type": null,
"signed_by": "raul",
"smses": [],
"source": "api",
"tag": "Delivered",
"title": "Title Name",
"tracked_count": 1,
"unique_token": "xy_fej9Llg",
"checkpoints": [
{
"slug": "dhl",
"city": null,
"created_at": "2014-06-25T07:33:53+00:00",
"country_name": "VALENCIA - SPAIN",
"message": "Awaiting collection by recipient as requested",
"country_iso3": null,
"tag": "InTransit",
"checkpoint_time": "2014-05-12T12:02:00",
"coordinates": [],
"state": null,
"zip": null
},
...
]
},
...
]
MD
def default_options
{
'api_key' => 'YOUR_API_KEY',
'path' => 'trackings',
}
end
def working?
!recent_error_logs?
end
def validate_options
errors.add(:base, "You need to specify a api key") unless options['api_key'].present?
errors.add(:base, "You need to specify a path request") unless options['path'].present?
end
def check
response = HTTParty.get(event_url, request_options)
events = JSON.parse response.body
create_event payload: events
end
private
def base_url
"https://api.aftership.com/v4/"
end
def event_url
Utils.normalize_uri(base_url + interpolated[:path].to_s).to_s
end
def request_options
{
headers: {
"aftership-api-key" => interpolated['api_key'],
"Content-Type" => "application/json",
}
}
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/change_detector_agent.rb | app/models/agents/change_detector_agent.rb | module Agents
class ChangeDetectorAgent < Agent
cannot_be_scheduled!
description <<~MD
The Change Detector Agent receives a stream of events and emits a new event when a property of the received event changes.
`property` specifies a [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) template that expands to the property to be watched, where you can use a variable `last_property` for the last property value. If you want to detect a new lowest price, try this: `{% assign drop = last_property | minus: price %}{% if last_property == blank or drop > 0 %}{{ price | default: last_property }}{% else %}{{ last_property }}{% endif %}`
`expected_update_period_in_days` is used to determine if the Agent is working.
The resulting event will be a copy of the received event.
MD
event_description <<~MD
This will change based on the source event. If you were event from the ShellCommandAgent, your outbound event might look like:
{
'command' => 'pwd',
'path' => '/home/Huginn',
'exit_status' => '0',
'errors' => '',
'output' => '/home/Huginn'
}
MD
def default_options
{
'property' => '{{output}}',
'expected_update_period_in_days' => 1
}
end
def validate_options
unless options['property'].present? && options['expected_update_period_in_days'].present?
errors.add(:base, "The property and expected_update_period_in_days fields are all required.")
end
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def receive(incoming_events)
incoming_events.each do |event|
interpolation_context.stack do
interpolation_context['last_property'] = last_property
handle(interpolated(event), event)
end
end
end
private
def handle(opts, event = nil)
property = opts['property']
if has_changed?(property)
created_event = create_event payload: event.payload
log("Propagating new event as property has changed to #{property} from #{last_property}",
outbound_event: created_event, inbound_event: event)
update_memory(property)
else
log("Not propagating as incoming event has not changed from #{last_property}.", inbound_event: event)
end
end
def has_changed?(property)
property != last_property
end
def last_property
self.memory['last_property']
end
def update_memory(property)
self.memory['last_property'] = property
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/twilio_agent.rb | app/models/agents/twilio_agent.rb | require 'securerandom'
module Agents
class TwilioAgent < Agent
cannot_be_scheduled!
cannot_create_events!
no_bulk_receive!
gem_dependency_check { defined?(Twilio) }
description <<~MD
The Twilio Agent receives and collects events and sends them via text message (up to 160 characters) or gives you a call when scheduled.
#{'## Include `twilio-ruby` in your Gemfile to use this Agent!' if dependencies_missing?}
It is assumed that events have a `message`, `text`, or `sms` key, the value of which is sent as the content of the text message/call. You can use the EventFormattingAgent if your event does not provide these keys.
Set `receiver_cell` to the number to receive text messages/call and `sender_cell` to the number sending them.
`expected_receive_period_in_days` is maximum number of days that you would expect to pass between events being received by this agent.
If you would like to receive calls, set `receive_call` to `true`. In this case, `server_url` must be set to the URL of your
Huginn installation (probably "https://#{ENV['DOMAIN']}"), which must be web-accessible. Be sure to set http/https correctly.
MD
def default_options
{
'account_sid' => 'ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
'auth_token' => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
'sender_cell' => 'xxxxxxxxxx',
'receiver_cell' => 'xxxxxxxxxx',
'server_url' => 'http://somename.com:3000',
'receive_text' => 'true',
'receive_call' => 'false',
'expected_receive_period_in_days' => '1'
}
end
def validate_options
unless options['account_sid'].present? && options['auth_token'].present? && options['sender_cell'].present? && options['receiver_cell'].present? && options['expected_receive_period_in_days'].present? && options['receive_call'].present? && options['receive_text'].present?
errors.add(:base,
'account_sid, auth_token, sender_cell, receiver_cell, receive_text, receive_call and expected_receive_period_in_days are all required')
end
end
def receive(incoming_events)
memory['pending_calls'] ||= {}
interpolate_with_each(incoming_events) do |event|
message = (event.payload['message'].presence || event.payload['text'].presence || event.payload['sms'].presence).to_s
if message.present?
if boolify(interpolated['receive_call'])
secret = SecureRandom.hex 3
memory['pending_calls'][secret] = message
make_call secret
end
if boolify(interpolated['receive_text'])
message = message.slice 0..1600
send_message message
end
end
end
end
def working?
last_receive_at && last_receive_at > interpolated['expected_receive_period_in_days'].to_i.days.ago && !recent_error_logs?
end
def send_message(message)
client.messages.create from: interpolated['sender_cell'],
to: interpolated['receiver_cell'],
body: message
end
def make_call(secret)
client.calls.create from: interpolated['sender_cell'],
to: interpolated['receiver_cell'],
url: post_url(interpolated['server_url'], secret)
end
def post_url(server_url, secret)
"#{server_url}/users/#{user.id}/web_requests/#{id}/#{secret}"
end
def receive_web_request(params, method, format)
if memory['pending_calls'].has_key? params['secret']
response = Twilio::TwiML::VoiceResponse.new { |r|
r.say(message: memory['pending_calls'][params['secret']], voice: 'woman')
}
memory['pending_calls'].delete params['secret']
[response.to_s, 200]
end
end
def client
@client ||= Twilio::REST::Client.new interpolated['account_sid'], interpolated['auth_token']
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/dropbox_watch_agent.rb | app/models/agents/dropbox_watch_agent.rb | module Agents
class DropboxWatchAgent < Agent
include DropboxConcern
cannot_receive_events!
default_schedule "every_1m"
description <<~MD
The Dropbox Watch Agent watches the given `dir_to_watch` and emits events with the detected changes.
#{'## Include the `dropbox-api` and `omniauth-dropbox` gems in your `Gemfile` and set `DROPBOX_OAUTH_KEY` and `DROPBOX_OAUTH_SECRET` in your environment to use Dropbox Agents.' if dependencies_missing?}
MD
event_description <<~MD
The event payload will contain the following fields:
{
"added": [ {
"path": "/path/to/added/file",
"rev": "1526952fd5",
"modified": "2017-10-14T18:39:41Z"
} ],
"removed": [ ... ],
"updated": [ ... ]
}
MD
def default_options
{
'dir_to_watch' => '/',
'expected_update_period_in_days' => 1
}
end
def validate_options
errors.add(:base, 'The `dir_to_watch` property is required.') unless options['dir_to_watch'].present?
errors.add(:base,
'Invalid `expected_update_period_in_days` format.') unless options['expected_update_period_in_days'].present? && is_positive_integer?(options['expected_update_period_in_days'])
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !received_event_without_error?
end
def check
current_contents = ls(interpolated['dir_to_watch'])
diff = DropboxDirDiff.new(previous_contents, current_contents)
create_event(payload: diff.to_hash) unless previous_contents.nil? || diff.empty?
remember(current_contents)
end
private
def ls(dir_to_watch)
dropbox.ls(dir_to_watch)
.select { |entry| entry.respond_to?(:rev) }
.map { |file| { 'path' => file.path, 'rev' => file.rev, 'modified' => file.server_modified } }
end
def previous_contents
self.memory['contents']
end
def remember(contents)
self.memory['contents'] = contents
end
# == Auxiliary classes ==
class DropboxDirDiff
def initialize(previous, current)
@previous = previous || []
@current = current || []
end
def empty?
(@previous == @current)
end
def to_hash
calculate_diff
{ added: @added, removed: @removed, updated: @updated }
end
private
def calculate_diff
@updated = @current.select do |current_entry|
previous_entry = find_by_path(@previous, current_entry['path'])
(current_entry != previous_entry) && !previous_entry.nil?
end
updated_entries = @updated + @previous.select do |previous_entry|
find_by_path(@updated, previous_entry['path'])
end
@added = @current - @previous - updated_entries
@removed = @previous - @current - updated_entries
end
def find_by_path(array, path)
array.find { |entry| entry['path'] == path }
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/adioso_agent.rb | app/models/agents/adioso_agent.rb | module Agents
class AdiosoAgent < Agent
cannot_receive_events!
default_schedule "every_1d"
description <<~MD
The Adioso Agent will tell you the minimum airline prices between a pair of cities, and within a certain period of time.
The currency is USD. Please make sure that the difference between `start_date` and `end_date` is less than 150 days. You will need to contact [Adioso](http://adioso.com/) for a `username` and `password`.
MD
event_description <<~MD
If flights are present then events look like:
{
"cost": 75.23,
"date": "June 25, 2013",
"route": "New York to Chicago"
}
otherwise
{
"nonetodest": "No flights found to the specified destination"
}
MD
def default_options
{
'start_date' => Date.today.httpdate[0..15],
'end_date' => Date.today.plus_with_duration(100).httpdate[0..15],
'from' => "New York",
'to' => "Chicago",
'username' => "xx",
'password' => "xx",
'expected_update_period_in_days' => "1"
}
end
def working?
event_created_within?(options['expected_update_period_in_days']) && !recent_error_logs?
end
def validate_options
unless %w[
start_date end_date from to username password expected_update_period_in_days
].all? { |field| options[field].present? }
errors.add(:base, "All fields are required")
end
end
def date_to_unix_epoch(date)
date.to_time.to_i
end
def check
auth_options = {
basic_auth: {
username: interpolated[:username],
password: interpolated[:password]
}
}
parse_response = HTTParty.get(
"http://api.adioso.com/v2/search/parse?#{{ q: "#{interpolated[:from]} to #{interpolated[:to]}" }.to_query}",
auth_options
)
fare_request = parse_response["search_url"].gsub(
/(end=)(\d*)([^\d]*)(\d*)/,
"\\1#{date_to_unix_epoch(interpolated['end_date'])}\\3#{date_to_unix_epoch(interpolated['start_date'])}"
)
fare = HTTParty.get fare_request, auth_options
if fare["warnings"]
create_event payload: fare["warnings"]
else
event = fare["results"].min_by { |x| x["cost"] }
event["date"] = Time.at(event["date"]).to_date.httpdate[0..15]
event["route"] = "#{interpolated['from']} to #{interpolated['to']}"
create_event payload: event
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/read_file_agent.rb | app/models/agents/read_file_agent.rb | module Agents
class ReadFileAgent < Agent
include FormConfigurable
include FileHandling
cannot_be_scheduled!
consumes_file_pointer!
def default_options
{
'data_key' => 'data'
}
end
description do
<<~MD
The ReadFileAgent takes events from `FileHandling` agents, reads the file, and emits the contents as a string.
`data_key` specifies the key of the emitted event which contains the file contents.
#{receiving_file_handling_agent_description}
MD
end
event_description <<~MD
Events look like:
{
"data" => '...'
}
MD
form_configurable :data_key, type: :string
def validate_options
if options['data_key'].blank?
errors.add(:base, "The 'data_key' options is required.")
end
end
def working?
received_event_without_error?
end
def receive(incoming_events)
incoming_events.each do |event|
next unless io = get_io(event)
create_event payload: { interpolated['data_key'] => io.read }
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/tumblr_likes_agent.rb | app/models/agents/tumblr_likes_agent.rb | module Agents
class TumblrLikesAgent < Agent
include TumblrConcern
gem_dependency_check { defined?(Tumblr::Client) }
description <<~MD
The Tumblr Likes Agent checks for liked Tumblr posts from a specific blog.
#{'## Include `tumblr_client` and `omniauth-tumblr` in your Gemfile to use this Agent!' if dependencies_missing?}
To be able to use this Agent you need to authenticate with Tumblr in the [Services](/services) section first.
**Required fields:**
`blog_name` The Tumblr URL you're querying (e.g. "staff.tumblr.com")
Set `expected_update_period_in_days` to the maximum amount of time that you'd expect to pass between Events being created by this Agent.
MD
default_schedule 'every_1h'
def validate_options
errors.add(:base, 'blog_name is required') unless options['blog_name'].present?
errors.add(:base,
'expected_update_period_in_days is required') unless options['expected_update_period_in_days'].present?
end
def working?
event_created_within?(options['expected_update_period_in_days']) && !recent_error_logs?
end
def default_options
{
'expected_update_period_in_days' => '10',
'blog_name' => 'someblog',
}
end
def check
memory[:ids] ||= []
memory[:last_liked] ||= 0
# Request Likes of blog_name after the last stored timestamp (or default of 0)
liked = tumblr.blog_likes(options['blog_name'], after: memory[:last_liked])
if liked['liked_posts']
# Loop over all liked posts which came back from Tumblr, add to memory, and create events.
liked['liked_posts'].each do |post|
next if memory[:ids].include?(post['id'])
memory[:ids].push(post['id'])
memory[:last_liked] = post['liked_timestamp'] if post['liked_timestamp'] > memory[:last_liked]
create_event(payload: post)
end
elsif liked['status'] && liked['msg']
# If there was a problem fetching likes (like 403 Forbidden or 404 Not Found) create an error message.
error "Error finding liked posts for #{options['blog_name']}: #{liked['status']} #{liked['msg']}"
end
# Store only the last 50 (maximum the API will return) IDs in memory to prevent performance issues.
memory[:ids] = memory[:ids].last(50) if memory[:ids].length > 50
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/public_transport_agent.rb | app/models/agents/public_transport_agent.rb | require 'date'
require 'cgi'
module Agents
class PublicTransportAgent < Agent
cannot_receive_events!
default_schedule "every_2m"
description <<~MD
The Public Transport Request Agent generates Events based on NextBus GPS transit predictions.
Specify the following user settings:
* agency (string)
* stops (array)
* alert_window_in_minutes (integer)
First, select an agency by visiting [http://www.nextbus.com/predictor/adaAgency.jsp](http://www.nextbus.com/predictor/adaAgency.jsp) and finding your transit system. Once you find it, copy the part of the URL after `?a=`. For example, for the San Francisco MUNI system, you would end up on [http://www.nextbus.com/predictor/adaDirection.jsp?a=**sf-muni**](http://www.nextbus.com/predictor/adaDirection.jsp?a=sf-muni) and copy "sf-muni". Put that into this Agent's agency setting.
Next, find the stop tags that you care about.
Select your destination and lets use the n-judah route. The link should be [http://www.nextbus.com/predictor/adaStop.jsp?a=sf-muni&r=N](http://www.nextbus.com/predictor/adaStop.jsp?a=sf-muni&r=N) Once you find it, copy the part of the URL after `r=`.
The link may not work, but we're just trying to get the part after the r=, so even if it gives an error, continue to the next step.
To find the tags for the sf-muni system, for the N route, visit this URL:
[http://webservices.nextbus.com/service/publicXMLFeed?command=routeConfig&a=sf-muni&r=**N**](http://webservices.nextbus.com/service/publicXMLFeed?command=routeConfig&a=sf-muni&r=N)
The tags are listed as tag="1234". Copy that number and add the route before it, separated by a pipe '|' symbol. Once you have one or more tags from that page, add them to this Agent's stop list. E.g,
agency: "sf-muni"
stops: ["N|5221", "N|5215"]
Remember to pick the appropriate stop, which will have different tags for in-bound and out-bound.
This Agent will generate predictions by requesting a URL similar to the following:
[http://webservices.nextbus.com/service/publicXMLFeed?command=predictionsForMultiStops&a=sf-muni&stops=N|5221&stops=N|5215](http://webservices.nextbus.com/service/publicXMLFeed?command=predictionsForMultiStops&a=sf-muni&stops=N|5221&stops=N|5215)
Finally, set the arrival window that you're interested in. E.g., 5 minutes. Events will be created by the agent anytime a new train or bus comes into that time window.
alert_window_in_minutes: 5
MD
event_description "Events look like this:\n\n " +
Utils.pretty_print({
"routeTitle": "N-Judah",
"stopTag": "5215",
"prediction": {
"epochTime": "1389622846689",
"seconds": "3454",
"minutes": "57",
"isDeparture": "false",
"affectedByLayover": "true",
"dirTag": "N__OB4KJU",
"vehicle": "1489",
"block": "9709",
"tripTag": "5840086"
}
})
def check_url
query = URI.encode_www_form([
["command", "predictionsForMultiStops"],
["a", interpolated["agency"]],
*interpolated["stops"].map { |a| ["stops", a] }
])
"http://webservices.nextbus.com/service/publicXMLFeed?#{query}"
end
def stops
interpolated["stops"].collect { |a| a.split("|").last }
end
def check
hydra = Typhoeus::Hydra.new
request = Typhoeus::Request.new(check_url, followlocation: true)
request.on_success do |response|
page = Nokogiri::XML response.body
predictions = page.css("//prediction")
predictions.each do |pr|
parent = pr.parent.parent
vals = { "routeTitle" => parent["routeTitle"], "stopTag" => parent["stopTag"] }
next unless pr["minutes"] && pr["minutes"].to_i < interpolated["alert_window_in_minutes"].to_i
vals = vals.merge Hash.from_xml(pr.to_xml)
if not_already_in_memory?(vals)
create_event(payload: vals)
log "creating event..."
update_memory(vals)
else
log "not creating event since already in memory"
end
end
end
hydra.queue request
hydra.run
end
def update_memory(vals)
add_to_memory(vals)
cleanup_old_memory
end
def cleanup_old_memory
self.memory["existing_routes"] ||= []
time = 2.hours.ago
self.memory["existing_routes"].reject! { |h| h["currentTime"].to_time <= time }
end
def add_to_memory(vals)
(self.memory["existing_routes"] ||= []) << {
"stopTag" => vals["stopTag"],
"tripTag" => vals["prediction"]["tripTag"],
"epochTime" => vals["prediction"]["epochTime"],
"currentTime" => Time.now
}
end
def not_already_in_memory?(vals)
m = self.memory["existing_routes"] || []
m.select { |h|
h['stopTag'] == vals["stopTag"] &&
h['tripTag'] == vals["prediction"]["tripTag"] &&
h['epochTime'] == vals["prediction"]["epochTime"]
}.count == 0
end
def default_options
{
agency: "sf-muni",
stops: ["N|5221", "N|5215"],
alert_window_in_minutes: 5
}
end
def validate_options
errors.add(:base, 'agency is required') unless options['agency'].present?
errors.add(:base, 'alert_window_in_minutes is required') unless options['alert_window_in_minutes'].present?
errors.add(:base, 'stops are required') unless options['stops'].present?
end
def working?
event_created_within?(2) && !recent_error_logs?
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/hipchat_agent.rb | app/models/agents/hipchat_agent.rb | module Agents
class HipchatAgent < Agent
include FormConfigurable
cannot_be_scheduled!
cannot_create_events!
no_bulk_receive!
gem_dependency_check { defined?(HipChat) }
description <<~MD
The Hipchat Agent sends messages to a Hipchat Room
#{'## Include `hipchat` in your Gemfile to use this Agent!' if dependencies_missing?}
To authenticate you need to set the `auth_token`, you can get one at your Hipchat Group Admin page which you can find here:
`https://`yoursubdomain`.hipchat.com/admin/api`
Change the `room_name` to the name of the room you want to send notifications to.
You can provide a `username` and a `message`. If you want to use mentions change `format` to "text" ([details](https://www.hipchat.com/docs/api/method/rooms/message)).
If you want your message to notify the room members change `notify` to "True".
Modify the background color of your message via the `color` attribute (one of "yellow", "red", "green", "purple", "gray", or "random")
Have a look at the [Wiki](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) to learn more about liquid templating.
MD
def default_options
{
'auth_token' => '',
'room_name' => '',
'username' => "Huginn",
'message' => "Hello from Huginn!",
'notify' => false,
'color' => 'yellow',
'format' => 'html'
}
end
form_configurable :auth_token, roles: :validatable
form_configurable :room_name, roles: :completable
form_configurable :username
form_configurable :message, type: :text
form_configurable :notify, type: :boolean
form_configurable :color, type: :array, values: ['yellow', 'red', 'green', 'purple', 'gray', 'random']
form_configurable :format, type: :array, values: ['html', 'text']
def validate_auth_token
client.rooms
true
rescue HipChat::UnknownResponseCode
false
end
def complete_room_name
client.rooms.collect { |room| { text: room.name, id: room.name } }
end
def validate_options
errors.add(:base,
"you need to specify a hipchat auth_token or provide a credential named hipchat_auth_token") unless options['auth_token'].present? || credential('hipchat_auth_token').present?
errors.add(:base,
"you need to specify a room_name or a room_name_path") if options['room_name'].blank? && options['room_name_path'].blank?
end
def working?
(last_receive_at.present? && last_error_log_at.nil?) || (last_receive_at.present? && last_error_log_at.present? && last_receive_at > last_error_log_at)
end
def receive(incoming_events)
incoming_events.each do |event|
mo = interpolated(event)
client[mo[:room_name]].send(
mo[:username][0..14],
mo[:message],
notify: boolify(mo[:notify]),
color: mo[:color],
message_format: mo[:format].presence || 'html'
)
end
end
private
def client
@client ||= HipChat::Client.new(interpolated[:auth_token].presence || credential('hipchat_auth_token'))
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/digest_agent.rb | app/models/agents/digest_agent.rb | module Agents
class DigestAgent < Agent
include FormConfigurable
default_schedule "6am"
description <<~MD
The Digest Agent collects any Events sent to it and emits them as a single event.
The resulting Event will have a payload message of `message`. You can use liquid templating in the `message`, have a look at the [Wiki](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) for details.
Set `expected_receive_period_in_days` to the maximum amount of time that you'd expect to pass between Events being received by this Agent.
If `retained_events` is set to 0 (the default), all received events are cleared after a digest is sent. Set `retained_events` to a value larger than 0 to keep a certain number of events around on a rolling basis to re-send in future digests.
For instance, say `retained_events` is set to 3 and the Agent has received Events `5`, `4`, and `3`. When a digest is sent, Events `5`, `4`, and `3` are retained for a future digest. After Event `6` is received, the next digest will contain Events `6`, `5`, and `4`.
MD
event_description <<~MD
Events look like this:
{
"events": [ event list ],
"message": "Your message"
}
MD
def default_options
{
"expected_receive_period_in_days" => "2",
"message" => "{{ events | map: 'message' | join: ',' }}",
"retained_events" => "0"
}
end
form_configurable :message, type: :text
form_configurable :expected_receive_period_in_days
form_configurable :retained_events
def validate_options
errors.add(:base,
'retained_events must be 0 to 999') unless options['retained_events'].to_i >= 0 && options['retained_events'].to_i < 1000
end
def working?
last_receive_at && last_receive_at > interpolated["expected_receive_period_in_days"].to_i.days.ago && !recent_error_logs?
end
def receive(incoming_events)
self.memory["queue"] ||= []
incoming_events.each do |event|
self.memory["queue"] << event.id
end
if interpolated["retained_events"].to_i > 0 && memory["queue"].length > interpolated["retained_events"].to_i
memory["queue"].shift(memory["queue"].length - interpolated["retained_events"].to_i)
end
end
def check
if self.memory["queue"] && self.memory["queue"].length > 0
events = received_events.where(id: self.memory["queue"]).order(id: :asc).to_a
payload = { "events" => events.map { |event| event.payload } }
payload["message"] = interpolated(payload)["message"]
create_event(payload:)
if interpolated["retained_events"].to_i == 0
self.memory["queue"] = []
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/attribute_difference_agent.rb | app/models/agents/attribute_difference_agent.rb | module Agents
class AttributeDifferenceAgent < Agent
cannot_be_scheduled!
description <<~MD
The Attribute Difference Agent receives events and emits a new event with
the difference or change of a specific attribute in comparison to the previous
event received.
`path` specifies the JSON path of the attribute to be used from the event.
`output` specifies the new attribute name that will be created on the original payload
and it will contain the difference or change.
`method` specifies if it should be...
* `percentage_change` eg. Previous value was `160`, new value is `116`. Percentage change is `-27.5`
* `decimal_difference` eg. Previous value was `5.5`, new value is `15.2`. Difference is `9.7`
* `integer_difference` eg. Previous value was `50`, new value is `40`. Difference is `-10`
`decimal_precision` defaults to `3`, but you can override this if you want.
`expected_update_period_in_days` is used to determine if the Agent is working.
The resulting event will be a copy of the received event with the difference
or change added as an extra attribute. If you use the `percentage_change` the
attribute will be formatted as such `{{attribute}}_change`, otherwise it will
be `{{attribute}}_diff`.
All configuration options will be liquid interpolated based on the incoming event.
MD
event_description <<~MD
This will change based on the source event.
MD
def default_options
{
'path' => '.data.rate',
'output' => 'rate_diff',
'method' => 'integer_difference',
'expected_update_period_in_days' => 1
}
end
def validate_options
unless options['path'].present? && options['method'].present? && options['output'].present? && options['expected_update_period_in_days'].present?
errors.add(:base, 'The attribute, method and expected_update_period_in_days fields are all required.')
end
end
def working?
event_created_within?(interpolated['expected_update_period_in_days']) && !recent_error_logs?
end
def receive(incoming_events)
incoming_events.each do |event|
handle(interpolated(event), event)
end
end
private
def handle(opts, event)
opts['decimal_precision'] ||= 3
attribute_value = Utils.value_at(event.payload, opts['path'])
attribute_value = attribute_value.nil? ? 0 : attribute_value
payload = event.payload.deep_dup
if opts['method'] == 'percentage_change'
change = calculate_percentage_change(attribute_value, opts['decimal_precision'])
payload[opts['output']] = change
elsif opts['method'] == 'decimal_difference'
difference = calculate_decimal_difference(attribute_value, opts['decimal_precision'])
payload[opts['output']] = difference
elsif opts['method'] == 'integer_difference'
difference = calculate_integer_difference(attribute_value)
payload[opts['output']] = difference
end
created_event = create_event(payload:)
log('Propagating new event', outbound_event: created_event, inbound_event: event)
update_memory(attribute_value)
end
def calculate_integer_difference(new_value)
return 0 if last_value.nil?
(new_value.to_i - last_value.to_i)
end
def calculate_decimal_difference(new_value, dec_pre)
return 0.0 if last_value.nil?
(new_value.to_f - last_value.to_f).round(dec_pre.to_i)
end
def calculate_percentage_change(new_value, dec_pre)
return 0.0 if last_value.nil?
(((new_value.to_f / last_value.to_f) * 100) - 100).round(dec_pre.to_i)
end
def last_value
memory['last_value']
end
def update_memory(new_value)
memory['last_value'] = new_value
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/s3_agent.rb | app/models/agents/s3_agent.rb | module Agents
class S3Agent < Agent
include FormConfigurable
include FileHandling
emits_file_pointer!
no_bulk_receive!
default_schedule 'every_1h'
gem_dependency_check { defined?(Aws::S3) }
description do
<<~MD
The S3Agent can watch a bucket for changes or emit an event for every file in that bucket. When receiving events, it writes the data into a file on S3.
#{'## Include `aws-sdk-core` in your Gemfile to use this Agent!' if dependencies_missing?}
`mode` must be present and either `read` or `write`, in `read` mode the agent checks the S3 bucket for changed files, with `write` it writes received events to a file in the bucket.
### Universal options
To use credentials for the `access_key` and `access_key_secret` use the liquid `credential` tag like so `{% credential name-of-credential %}`
Select the `region` in which the bucket was created.
### Reading
When `watch` is set to `true` the S3Agent will watch the specified `bucket` for changes. An event will be emitted for every detected change.
When `watch` is set to `false` the agent will emit an event for every file in the bucket on each sheduled run.
#{emitting_file_handling_agent_description}
### Writing
Specify the filename to use in `filename`, Liquid interpolation is possible to change the name per event.
Use [Liquid](https://github.com/huginn/huginn/wiki/Formatting-Events-using-Liquid) templating in `data` to specify which part of the received event should be written.
MD
end
event_description do
"Events will looks like this:\n\n " +
if boolify(interpolated['watch'])
Utils.pretty_print({
"file_pointer" => {
"file" => "filename",
"agent_id" => id
},
"event_type" => "modified/added/removed"
})
else
Utils.pretty_print({
"file_pointer" => {
"file" => "filename",
"agent_id" => id
}
})
end
end
def default_options
{
'mode' => 'read',
'access_key_id' => '',
'access_key_secret' => '',
'watch' => 'true',
'bucket' => "",
'data' => '{{ data }}'
}
end
form_configurable :mode, type: :array, values: %w[read write]
form_configurable :access_key_id, roles: :validatable
form_configurable :access_key_secret, roles: :validatable
form_configurable :region, type: :array,
values: %w[us-east-1 us-west-1 us-west-2 eu-west-1 eu-central-1 ap-southeast-1 ap-southeast-2 ap-northeast-1 ap-northeast-2 sa-east-1]
form_configurable :watch, type: :array, values: %w[true false]
form_configurable :bucket, roles: :completable
form_configurable :filename
form_configurable :data
def validate_options
if options['mode'].blank? || !['read', 'write'].include?(options['mode'])
errors.add(:base, "The 'mode' option is required and must be set to 'read' or 'write'")
end
if options['bucket'].blank?
errors.add(:base, "The 'bucket' option is required.")
end
if options['region'].blank?
errors.add(:base, "The 'region' option is required.")
end
case interpolated['mode']
when 'read'
if options['watch'].blank? || ![true, false].include?(boolify(options['watch']))
errors.add(:base, "The 'watch' option is required and must be set to 'true' or 'false'")
end
when 'write'
if options['filename'].blank?
errors.add(:base, "filename must be specified in 'write' mode")
end
if options['data'].blank?
errors.add(:base, "data must be specified in 'write' mode")
end
end
end
def validate_access_key_id
!!buckets
end
def validate_access_key_secret
!!buckets
end
def complete_bucket
(buckets || []).collect { |room| { text: room.name, id: room.name } }
end
def working?
checked_without_error?
end
def check
return if interpolated['mode'] != 'read'
contents = safely do
get_bucket_contents
end
if boolify(interpolated['watch'])
watch(contents)
else
contents.each do |key, _|
create_event payload: get_file_pointer(key)
end
end
end
def get_io(file)
client.get_object(bucket: interpolated['bucket'], key: file).body
end
def receive(incoming_events)
return if interpolated['mode'] != 'write'
incoming_events.each do |event|
safely do
mo = interpolated(event)
client.put_object(bucket: mo['bucket'], key: mo['filename'], body: mo['data'])
end
end
end
private
def safely
yield
rescue Aws::S3::Errors::AccessDenied => e
error("Could not access '#{interpolated['bucket']}' #{e.class} #{e.message}")
rescue Aws::S3::Errors::ServiceError => e
error("#{e.class}: #{e.message}")
end
def watch(contents)
if last_check_at.nil?
self.memory['seen_contents'] = contents
return
end
new_memory = contents.dup
memory['seen_contents'].each do |key, etag|
if contents[key].blank?
create_event payload: get_file_pointer(key).merge(event_type: :removed)
elsif contents[key] != etag
create_event payload: get_file_pointer(key).merge(event_type: :modified)
end
contents.delete(key)
end
contents.each do |key, _etag|
create_event payload: get_file_pointer(key).merge(event_type: :added)
end
self.memory['seen_contents'] = new_memory
end
def get_bucket_contents
contents = {}
client.list_objects(bucket: interpolated['bucket']).each do |response|
response.contents.each do |file|
contents[file.key] = file.etag
end
end
contents
end
def client
@client ||= Aws::S3::Client.new(credentials: Aws::Credentials.new(interpolated['access_key_id'], interpolated['access_key_secret']),
region: interpolated['region'])
end
def buckets(log = false)
@buckets ||= client.list_buckets.buckets
rescue Aws::S3::Errors::ServiceError => e
false
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/email_digest_agent.rb | app/models/agents/email_digest_agent.rb | require 'net/smtp'
module Agents
class EmailDigestAgent < Agent
include EmailConcern
default_schedule "5am"
cannot_create_events!
description <<~MD
The Email Digest Agent collects any Events sent to it and sends them all via email when scheduled. The number of
used events also relies on the `Keep events` option of the emitting Agent, meaning that if events expire before
this agent is scheduled to run, they will not appear in the email.
By default, the email will have a `subject` and an optional `headline` before listing the Events. If the Events'
payloads contain a `message`, that will be highlighted, otherwise everything in
their payloads will be shown.
You can specify one or more `recipients` for the email, or skip the option in order to send the email to your
account's default email address.
You can provide a `from` address for the email, or leave it blank to default to the value of `EMAIL_FROM_ADDRESS` (`#{ENV['EMAIL_FROM_ADDRESS']}`).
You can provide a `content_type` for the email and specify `text/plain` or `text/html` to be sent.
If you do not specify `content_type`, then the recipient email server will determine the correct rendering.
Set `expected_receive_period_in_days` to the maximum amount of time that you'd expect to pass between Events being received by this Agent.
MD
def default_options
{
'subject' => "You have some notifications!",
'headline' => "Your notifications:",
'expected_receive_period_in_days' => "2"
}
end
def working?
received_event_without_error?
end
def receive(incoming_events)
self.memory['events'] ||= []
incoming_events.each do |event|
self.memory['events'] << event.id
end
end
def check
if self.memory['events'] && self.memory['events'].length > 0
payloads = received_events.reorder("events.id ASC").where(id: self.memory['events']).pluck(:payload).to_a
groups = payloads.map { |payload| present(payload) }
recipients.each do |recipient|
SystemMailer.send_message(
to: recipient,
from: interpolated['from'],
subject: interpolated['subject'],
headline: interpolated['headline'],
content_type: interpolated['content_type'],
groups:
).deliver_now
log "Sent digest mail to #{recipient}"
rescue StandardError => e
error("Error sending digest mail to #{recipient}: #{e.message}")
raise
end
self.memory['events'] = []
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/phantom_js_cloud_agent.rb | app/models/agents/phantom_js_cloud_agent.rb | require 'json'
require 'uri'
module Agents
class PhantomJsCloudAgent < Agent
include ERB::Util
include FormConfigurable
include WebRequestConcern
can_dry_run!
default_schedule 'every_12h'
description <<~MD
This Agent generates [PhantomJs Cloud](https://phantomjscloud.com/) URLs that can be used to render JavaScript-heavy webpages for content extraction.
URLs generated by this Agent are formulated in accordance with the [PhantomJs Cloud API](https://phantomjscloud.com/docs/index.html).
The generated URLs can then be supplied to a Website Agent to fetch and parse the content.
[Sign up](https://dashboard.phantomjscloud.com/dash.html#/signup) to get an api key, and add it in Huginn credentials.
Please see the [Huginn Wiki for more info](https://github.com/huginn/huginn/wiki/Browser-Emulation-Using-PhantomJS-Cloud).
Options:
* `Api key` - PhantomJs Cloud API Key credential stored in Huginn
* `Url` - The url to render
* `Mode` - Create a new `clean` event or `merge` old payload with new values (default: `clean`)
* `Render type` - Render as html, plain text without html tags, or jpg as screenshot of the page (default: `html`)
* `Output as json` - Return the page contents and metadata as a JSON object (default: `false`)
* `Ignore images` - Skip loading of inlined images (default: `false`)
* `Url agent` - A custom User-Agent name (default: `#{default_user_agent}`)
* `Wait interval` - Milliseconds to delay rendering after the last resource is finished loading.
This is useful in case there are any AJAX requests or animations that need to finish up.
This can safely be set to 0 if you know there are no AJAX or animations you need to wait for (default: `1000`ms)
As this agent only provides a limited subset of the most commonly used options, you can follow [this guide](https://github.com/huginn/huginn/wiki/Browser-Emulation-Using-PhantomJS-Cloud) to make full use of additional options PhantomJsCloud provides.
MD
event_description <<~MD
Events look like this:
{
"url": "..."
}
MD
def default_options
{
'mode' => 'clean',
'url' => 'http://xkcd.com',
'render_type' => 'html',
'output_as_json' => false,
'ignore_images' => false,
'user_agent' => self.class.default_user_agent,
'wait_interval' => '1000'
}
end
form_configurable :mode, type: :array, values: ['clean', 'merge']
form_configurable :api_key, roles: :completable
form_configurable :url
form_configurable :render_type, type: :array, values: ['html', 'plainText', 'jpg']
form_configurable :output_as_json, type: :boolean
form_configurable :ignore_images, type: :boolean
form_configurable :user_agent, type: :text
form_configurable :wait_interval
def mode
interpolated['mode'].presence || default_options['mode']
end
def render_type
interpolated['render_type'].presence || default_options['render_type']
end
def output_as_json
boolify(interpolated['output_as_json'].presence ||
default_options['output_as_json'])
end
def ignore_images
boolify(interpolated['ignore_images'].presence ||
default_options['ignore_images'])
end
def user_agent
interpolated['user_agent'].presence || self.class.default_user_agent
end
def wait_interval
interpolated['wait_interval'].presence || default_options['wait_interval']
end
def page_request_settings
prs = {}
prs[:ignoreImages] = ignore_images if ignore_images
prs[:userAgent] = user_agent if user_agent.present?
if wait_interval != default_options['wait_interval']
prs[:wait_interval] = wait_interval
end
prs
end
def build_phantom_url(interpolated)
api_key = interpolated[:api_key]
page_request_hash = {
url: interpolated[:url],
renderType: render_type
}
page_request_hash[:outputAsJson] = output_as_json if output_as_json
page_request_settings_hash = page_request_settings
if page_request_settings_hash.any?
page_request_hash[:requestSettings] = page_request_settings_hash
end
request = page_request_hash.to_json
log "Generated request: #{request}"
encoded = url_encode(request)
"https://phantomjscloud.com/api/browser/v2/#{api_key}/?request=#{encoded}"
end
def check
phantom_url = build_phantom_url(interpolated)
create_event payload: { 'url' => phantom_url }
end
def receive(incoming_events)
incoming_events.each do |event|
interpolate_with(event) do
existing_payload = interpolated['mode'].to_s == 'merge' ? event.payload : {}
phantom_url = build_phantom_url(interpolated)
result = { 'url' => phantom_url }
create_event payload: existing_payload.merge(result)
end
end
end
def complete_api_key
user.user_credentials.map { |c| { text: c.credential_name, id: "{% credential #{c.credential_name} %}" } }
end
def working?
!recent_error_logs? || received_event_without_error?
end
def validate_options
# Check for required fields
errors.add(:base, 'Url is required') unless options['url'].present?
errors.add(:base, 'API key (credential) is required') unless options['api_key'].present?
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/models/agents/stubhub_agent.rb | app/models/agents/stubhub_agent.rb | module Agents
class StubhubAgent < Agent
cannot_receive_events!
description <<~MD
The StubHub Agent creates an event for a given StubHub Event.
It can be used to track how many tickets are available for the event and the minimum and maximum price. All that is required is that you paste in the url from the actual event, e.g. https://www.stubhub.com/outside-lands-music-festival-tickets/outside-lands-music-festival-3-day-pass-san-francisco-golden-gate-park-polo-fields-8-8-2014-9020701/
MD
event_description <<~MD
Events looks like this:
{
"url": "https://stubhub.com/valid-event-url"
"name": "Event Name"
"date": "2014-08-01"
"max_price": "999.99"
"min_price": "100.99"
"total_postings": "50"
"total_tickets": "150"
"venue_name": "Venue Name"
}
MD
default_schedule "every_1d"
def working?
event_created_within?(1) && !recent_error_logs?
end
def default_options
{ 'url' => 'https://stubhub.com/enter-your-event-here' }
end
def validate_options
errors.add(:base, 'url is required') unless options['url'].present?
end
def url
interpolated['url']
end
def check
create_event payload: fetch_stubhub_data(url)
end
def fetch_stubhub_data(url)
StubhubFetcher.call(url)
end
class StubhubFetcher
def self.call(url)
new(url).fields
end
def initialize(url)
@url = url
end
def event_id
/(\d*)\/{0,1}\z/.match(url)[1]
end
def base_url
'https://www.stubhub.com/listingCatalog/select/?q='
end
def build_url
base_url + "%2B+stubhubDocumentType%3Aevent%0D%0A%2B+event_id%3A#{event_id}%0D%0A&start=0&rows=10&wt=json"
end
def response
uri = URI(build_url)
Net::HTTP.get(uri)
end
def parse_response
JSON.parse(response)
end
def fields
stubhub_fields = parse_response['response']['docs'][0]
{
'url' => url,
'name' => stubhub_fields['seo_description_en_US'],
'date' => stubhub_fields['event_date_local'],
'max_price' => stubhub_fields['maxPrice'].to_s,
'min_price' => stubhub_fields['minPrice'].to_s,
'total_postings' => stubhub_fields['totalPostings'].to_s,
'total_tickets' => stubhub_fields['totalTickets'].to_i.to_s,
'venue_name' => stubhub_fields['venue_name']
}
end
private
attr_reader :url
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/app/mailers/system_mailer.rb | app/mailers/system_mailer.rb | class SystemMailer < ActionMailer::Base
default :from => ENV['EMAIL_FROM_ADDRESS'].presence || 'you@example.com'
def send_message(options)
@groups = options[:groups]
@headline = options[:headline]
@body = options[:body]
mail_options = { to: options[:to], subject: options[:subject] }
mail_options[:from] = options[:from] if options[:from].present?
if options[:content_type].present?
mail(mail_options) do |format|
format.text if options[:content_type] == "text/plain"
format.html if options[:content_type] == "text/html"
end
else
mail(mail_options)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv.rb | vendor/gems/dotenv-3.1.0/lib/dotenv.rb | require "dotenv/parser"
require "dotenv/environment"
require "dotenv/missing_keys"
require "dotenv/diff"
# Shim to load environment variables from `.env files into `ENV`.
module Dotenv
extend self
# An internal monitor to synchronize access to ENV in multi-threaded environments.
SEMAPHORE = Monitor.new
private_constant :SEMAPHORE
attr_accessor :instrumenter
# Loads environment variables from one or more `.env` files. See `#parse` for more details.
def load(*filenames, overwrite: false, ignore: true)
parse(*filenames, overwrite: overwrite, ignore: ignore) do |env|
instrument(:load, env: env) do |payload|
update(env, overwrite: overwrite)
end
end
end
# Same as `#load`, but raises Errno::ENOENT if any files don't exist
def load!(*filenames)
load(*filenames, ignore: false)
end
# same as `#load`, but will overwrite existing values in `ENV`
def overwrite(*filenames)
load(*filenames, overwrite: true)
end
alias_method :overload, :overwrite
# same as `#overwrite`, but raises Errno::ENOENT if any files don't exist
def overwrite!(*filenames)
load(*filenames, overwrite: true, ignore: false)
end
alias_method :overload!, :overwrite!
# Parses the given files, yielding for each file if a block is given.
#
# @param filenames [String, Array<String>] Files to parse
# @param overwrite [Boolean] Overwrite existing `ENV` values
# @param ignore [Boolean] Ignore non-existent files
# @param block [Proc] Block to yield for each parsed `Dotenv::Environment`
# @return [Hash] parsed key/value pairs
def parse(*filenames, overwrite: false, ignore: true, &block)
filenames << ".env" if filenames.empty?
filenames = filenames.reverse if overwrite
filenames.reduce({}) do |hash, filename|
begin
env = Environment.new(File.expand_path(filename), overwrite: overwrite)
env = block.call(env) if block
rescue Errno::ENOENT
raise unless ignore
end
hash.merge! env || {}
end
end
# Save the current `ENV` to be restored later
def save
instrument(:save) do |payload|
@diff = payload[:diff] = Dotenv::Diff.new
end
end
# Restore `ENV` to a given state
#
# @param env [Hash] Hash of keys and values to restore, defaults to the last saved state
# @param safe [Boolean] Is it safe to modify `ENV`? Defaults to `true` in the main thread, otherwise raises an error.
def restore(env = @diff&.a, safe: Thread.current == Thread.main)
diff = Dotenv::Diff.new(b: env)
return unless diff.any?
unless safe
raise ThreadError, <<~EOE.tr("\n", " ")
Dotenv.restore is not thread safe. Use `Dotenv.modify { }` to update ENV for the duration
of the block in a thread safe manner, or call `Dotenv.restore(safe: true)` to ignore
this error.
EOE
end
instrument(:restore, diff: diff) { ENV.replace(env) }
end
# Update `ENV` with the given hash of keys and values
#
# @param env [Hash] Hash of keys and values to set in `ENV`
# @param overwrite [Boolean] Overwrite existing `ENV` values
def update(env = {}, overwrite: false)
instrument(:update) do |payload|
diff = payload[:diff] = Dotenv::Diff.new do
ENV.update(env.transform_keys(&:to_s)) do |key, old_value, new_value|
# This block is called when a key exists. Return the new value if overwrite is true.
overwrite ? new_value : old_value
end
end
diff.env
end
end
# Modify `ENV` for the block and restore it to its previous state afterwards.
#
# Note that the block is synchronized to prevent concurrent modifications to `ENV`,
# so multiple threads will be executed serially.
#
# @param env [Hash] Hash of keys and values to set in `ENV`
def modify(env = {}, &block)
SEMAPHORE.synchronize do
diff = Dotenv::Diff.new
update(env, overwrite: true)
block.call
ensure
restore(diff.a, safe: true)
end
end
def require_keys(*keys)
missing_keys = keys.flatten - ::ENV.keys
return if missing_keys.empty?
raise MissingKeys, missing_keys
end
private
def instrument(name, payload = {}, &block)
if instrumenter
instrumenter.instrument("#{name}.dotenv", payload, &block)
else
block&.call payload
end
end
end
require "dotenv/rails" if defined?(Rails::Railtie)
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/version.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/version.rb | module Dotenv
VERSION = "3.1.0".freeze
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/rails.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/rails.rb | # Since rubygems doesn't support optional dependencies, we have to manually check
unless Gem::Requirement.new(">= 6.1").satisfied_by?(Gem::Version.new(Rails.version))
warn "dotenv 3.0 only supports Rails 6.1 or later. Use dotenv ~> 2.0."
return
end
require "dotenv/replay_logger"
require "dotenv/log_subscriber"
Dotenv.instrumenter = ActiveSupport::Notifications
# Watch all loaded env files with Spring
begin
require "spring/commands"
ActiveSupport::Notifications.subscribe("load.dotenv") do |*args|
event = ActiveSupport::Notifications::Event.new(*args)
Spring.watch event.payload[:env].filename if Rails.application
end
rescue LoadError, ArgumentError
# Spring is not available
end
module Dotenv
# Rails integration for using Dotenv to load ENV variables from a file
class Rails < ::Rails::Railtie
delegate :files, :files=, :overwrite, :overwrite=, :autorestore, :autorestore=, :logger, to: "config.dotenv"
def initialize
super()
config.dotenv = ActiveSupport::OrderedOptions.new.update(
# Rails.logger is not available yet, so we'll save log messages and replay them when it is
logger: Dotenv::ReplayLogger.new,
overwrite: false,
files: [
".env.#{env}.local",
(".env.local" unless env.test?),
".env.#{env}",
".env"
].compact,
autorestore: env.test? && !defined?(ClimateControl) && !defined?(IceAge)
)
end
# Public: Load dotenv
#
# This will get called during the `before_configuration` callback, but you
# can manually call `Dotenv::Rails.load` if you needed it sooner.
def load
Dotenv.load(*files.map { |file| root.join(file).to_s }, overwrite: overwrite)
end
def overload
deprecator.warn("Dotenv::Rails.overload is deprecated. Set `Dotenv::Rails.overwrite = true` and call Dotenv::Rails.load instead.")
Dotenv.load(*files.map { |file| root.join(file).to_s }, overwrite: true)
end
# Internal: `Rails.root` is nil in Rails 4.1 before the application is
# initialized, so this falls back to the `RAILS_ROOT` environment variable,
# or the current working directory.
def root
::Rails.root || Pathname.new(ENV["RAILS_ROOT"] || Dir.pwd)
end
# Set a new logger and replay logs
def logger=(new_logger)
logger.replay new_logger if logger.is_a?(ReplayLogger)
config.dotenv.logger = new_logger
end
# The current environment that the app is running in.
#
# When running `rake`, the Rails application is initialized in development, so we have to
# check which rake tasks are being run to determine the environment.
#
# See https://github.com/bkeepers/dotenv/issues/219
def env
@env ||= if defined?(Rake.application) && Rake.application.top_level_tasks.grep(TEST_RAKE_TASKS).any?
env = Rake.application.options.show_tasks ? "development" : "test"
ActiveSupport::EnvironmentInquirer.new(env)
else
::Rails.env
end
end
TEST_RAKE_TASKS = /^(default$|test(:|$)|parallel:spec|spec(:|$))/
def deprecator # :nodoc:
@deprecator ||= ActiveSupport::Deprecation.new
end
# Rails uses `#method_missing` to delegate all class methods to the
# instance, which means `Kernel#load` gets called here. We don't want that.
def self.load
instance.load
end
initializer "dotenv", after: :initialize_logger do |app|
if logger.is_a?(ReplayLogger)
self.logger = ActiveSupport::TaggedLogging.new(::Rails.logger).tagged("dotenv")
end
end
initializer "dotenv.deprecator" do |app|
app.deprecators[:dotenv] = deprecator if app.respond_to?(:deprecators)
end
initializer "dotenv.autorestore" do |app|
require "dotenv/autorestore" if autorestore
end
config.before_configuration { load }
end
Railtie = ActiveSupport::Deprecation::DeprecatedConstantProxy.new("Dotenv::Railtie", "Dotenv::Rails", Dotenv::Rails.deprecator)
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/rails-now.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/rails-now.rb | # If you use gems that require environment variables to be set before they are
# loaded, then list `dotenv` in the `Gemfile` before those other gems and
# require `dotenv/load`.
#
# gem "dotenv", require: "dotenv/load"
# gem "gem-that-requires-env-variables"
#
require "dotenv/load"
warn '[DEPRECATION] `require "dotenv/rails-now"` is deprecated. Use `require "dotenv/load"` instead.', caller(1..1).first
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/diff.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/diff.rb | module Dotenv
# A diff between multiple states of ENV.
class Diff
# The initial state
attr_reader :a
# The final or current state
attr_reader :b
# Create a new diff. If given a block, the state of ENV after the block will be preserved as
# the final state for comparison. Otherwise, the current ENV will be the final state.
#
# @param a [Hash] the initial state, defaults to a snapshot of current ENV
# @param b [Hash] the final state, defaults to the current ENV
# @yield [diff] a block to execute before recording the final state
def initialize(a: snapshot, b: ENV, &block)
@a, @b = a, b
block&.call self
ensure
@b = snapshot if block
end
# Return a Hash of keys added with their new values
def added
b.slice(*(b.keys - a.keys))
end
# Returns a Hash of keys removed with their previous values
def removed
a.slice(*(a.keys - b.keys))
end
# Returns of Hash of keys changed with an array of their previous and new values
def changed
(b.slice(*a.keys).to_a - a.to_a).map do |(k, v)|
[k, [a[k], v]]
end.to_h
end
# Returns a Hash of all added, changed, and removed keys and their new values
def env
b.slice(*(added.keys + changed.keys)).merge(removed.transform_values { |v| nil })
end
# Returns true if any keys were added, removed, or changed
def any?
[added, removed, changed].any?(&:any?)
end
private
def snapshot
ENV.to_h.freeze
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/environment.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/environment.rb | module Dotenv
# A `.env` file that will be read and parsed into a Hash
class Environment < Hash
attr_reader :filename, :overwrite
# Create a new Environment
#
# @param filename [String] the path to the file to read
# @param overwrite [Boolean] whether the parser should assume existing values will be overwritten
def initialize(filename, overwrite: false)
super()
@filename = filename
@overwrite = overwrite
load
end
def load
update Parser.call(read, overwrite: overwrite)
end
def read
File.open(@filename, "rb:bom|utf-8", &:read)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/log_subscriber.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/log_subscriber.rb | require "active_support/log_subscriber"
module Dotenv
# Logs instrumented events
#
# Usage:
# require "active_support/notifications"
# require "dotenv/log_subscriber"
# Dotenv.instrumenter = ActiveSupport::Notifications
#
class LogSubscriber < ActiveSupport::LogSubscriber
attach_to :dotenv
def logger
Dotenv::Rails.logger
end
def load(event)
env = event.payload[:env]
info "Loaded #{color_filename(env.filename)}"
end
def update(event)
diff = event.payload[:diff]
changed = diff.env.keys.map { |key| color_var(key) }
debug "Set #{changed.to_sentence}" if diff.any?
end
def save(event)
info "Saved a snapshot of #{color_env_constant}"
end
def restore(event)
diff = event.payload[:diff]
removed = diff.removed.keys.map { |key| color(key, :RED) }
restored = (diff.changed.keys + diff.added.keys).map { |key| color_var(key) }
if removed.any? || restored.any?
info "Restored snapshot of #{color_env_constant}"
debug "Unset #{removed.to_sentence}" if removed.any?
debug "Restored #{restored.to_sentence}" if restored.any?
end
end
private
def color_filename(filename)
color(Pathname.new(filename).relative_path_from(Dotenv::Rails.root.to_s).to_s, :YELLOW)
end
def color_var(name)
color(name, :CYAN)
end
def color_env_constant
color("ENV", :GREEN)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/load.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/load.rb | require "dotenv"
defined?(Dotenv::Rails) ? Dotenv::Rails.load : Dotenv.load
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/parser.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/parser.rb | require "dotenv/substitutions/variable"
require "dotenv/substitutions/command" if RUBY_VERSION > "1.8.7"
module Dotenv
# Error raised when encountering a syntax error while parsing a .env file.
class FormatError < SyntaxError; end
# Parses the `.env` file format into key/value pairs.
# It allows for variable substitutions, command substitutions, and exporting of variables.
class Parser
@substitutions =
[Dotenv::Substitutions::Variable, Dotenv::Substitutions::Command]
LINE = /
(?:^|\A) # beginning of line
\s* # leading whitespace
(?:export\s+)? # optional export
([\w.]+) # key
(?:\s*=\s*?|:\s+?) # separator
( # optional value begin
\s*'(?:\\'|[^'])*' # single quoted value
| # or
\s*"(?:\\"|[^"])*" # double quoted value
| # or
[^\#\r\n]+ # unquoted value
)? # value end
\s* # trailing whitespace
(?:\#.*)? # optional comment
(?:$|\z) # end of line
/x
class << self
attr_reader :substitutions
def call(...)
new(...).call
end
end
def initialize(string, overwrite: false)
@string = string
@hash = {}
@overwrite = overwrite
end
def call
# Convert line breaks to same format
lines = @string.gsub(/\r\n?/, "\n")
# Process matches
lines.scan(LINE).each do |key, value|
@hash[key] = parse_value(value || "")
end
# Process non-matches
lines.gsub(LINE, "").split(/[\n\r]+/).each do |line|
parse_line(line)
end
@hash
end
private
def parse_line(line)
if line.split.first == "export"
if variable_not_set?(line)
raise FormatError, "Line #{line.inspect} has an unset variable"
end
end
end
def parse_value(value)
# Remove surrounding quotes
value = value.strip.sub(/\A(['"])(.*)\1\z/m, '\2')
maybe_quote = Regexp.last_match(1)
value = unescape_value(value, maybe_quote)
perform_substitutions(value, maybe_quote)
end
def unescape_characters(value)
value.gsub(/\\([^$])/, '\1')
end
def expand_newlines(value)
if (@hash["DOTENV_LINEBREAK_MODE"] || ENV["DOTENV_LINEBREAK_MODE"]) == "legacy"
value.gsub('\n', "\n").gsub('\r', "\r")
else
value.gsub('\n', "\\\\\\n").gsub('\r', "\\\\\\r")
end
end
def variable_not_set?(line)
!line.split[1..].all? { |var| @hash.member?(var) }
end
def unescape_value(value, maybe_quote)
if maybe_quote == '"'
unescape_characters(expand_newlines(value))
elsif maybe_quote.nil?
unescape_characters(value)
else
value
end
end
def perform_substitutions(value, maybe_quote)
if maybe_quote != "'"
self.class.substitutions.each do |proc|
value = proc.call(value, @hash, overwrite: @overwrite)
end
end
value
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/missing_keys.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/missing_keys.rb | module Dotenv
class Error < StandardError; end
class MissingKeys < Error # :nodoc:
def initialize(keys)
key_word = "key#{(keys.size > 1) ? "s" : ""}"
super("Missing required configuration #{key_word}: #{keys.inspect}")
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/template.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/template.rb | module Dotenv
EXPORT_COMMAND = "export ".freeze
# Class for creating a template from a env file
class EnvTemplate
def initialize(env_file)
@env_file = env_file
end
def create_template
File.open(@env_file, "r") do |env_file|
File.open("#{@env_file}.template", "w") do |env_template|
env_file.each do |line|
env_template.puts template_line(line)
end
end
end
end
def template_line(line)
var, value = line.split("=")
template = var.gsub(EXPORT_COMMAND, "")
is_a_comment = var.strip[0].eql?("#")
(value.nil? || is_a_comment) ? line : "#{var}=#{template}"
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/cli.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/cli.rb | require "dotenv"
require "dotenv/version"
require "dotenv/template"
require "optparse"
module Dotenv
# The `dotenv` command line interface. Run `$ dotenv --help` to see usage.
class CLI < OptionParser
attr_reader :argv, :filenames, :overwrite
def initialize(argv = [])
@argv = argv.dup
@filenames = []
@ignore = false
@overwrite = false
super("Usage: dotenv [options]")
separator ""
on("-f FILES", Array, "List of env files to parse") do |list|
@filenames = list
end
on("-i", "--ignore", "ignore missing env files") do
@ignore = true
end
on("-o", "--overwrite", "overwrite existing ENV variables") do
@overwrite = true
end
on("--overload") { @overwrite = true }
on("-h", "--help", "Display help") do
puts self
exit
end
on("-v", "--version", "Show version") do
puts "dotenv #{Dotenv::VERSION}"
exit
end
on("-t", "--template=FILE", "Create a template env file") do |file|
template = Dotenv::EnvTemplate.new(file)
template.create_template
end
order!(@argv)
end
def run
Dotenv.load(*@filenames, overwrite: @overwrite, ignore: @ignore)
rescue Errno::ENOENT => e
abort e.message
else
exec(*@argv) unless @argv.empty?
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/autorestore.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/autorestore.rb | # Automatically restore `ENV` to its original state after
if defined?(RSpec.configure)
RSpec.configure do |config|
# Save ENV before the suite starts
config.before(:suite) { Dotenv.save }
# Restore ENV after each example
config.after { Dotenv.restore }
end
end
if defined?(ActiveSupport)
ActiveSupport.on_load(:active_support_test_case) do
ActiveSupport::TestCase.class_eval do
# Save ENV before each test
setup { Dotenv.save }
# Restore ENV after each test
teardown do
Dotenv.restore
rescue ThreadError => e
# Restore will fail if running tests in parallel.
warn e.message
warn "Set `config.dotenv.autorestore = false` in `config/initializers/test.rb`" if defined?(Dotenv::Rails)
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/tasks.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/tasks.rb | desc "Load environment settings from .env"
task :dotenv do
require "dotenv"
Dotenv.load
end
task environment: :dotenv
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/replay_logger.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/replay_logger.rb | module Dotenv
# A logger that can be used before the apps real logger is initialized.
class ReplayLogger < Logger
def initialize
super(nil) # Doesn't matter what this is, it won't be used.
@logs = []
end
# Override the add method to store logs so we can replay them to a real logger later.
def add(*args, &block)
@logs.push([args, block])
end
# Replay the store logs to a real logger.
def replay(logger)
@logs.each { |args, block| logger.add(*args, &block) }
@logs.clear
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/substitutions/command.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/substitutions/command.rb | require "English"
module Dotenv
module Substitutions
# Substitute shell commands in a value.
#
# SHA=$(git rev-parse HEAD)
#
module Command
class << self
INTERPOLATED_SHELL_COMMAND = /
(?<backslash>\\)? # is it escaped with a backslash?
\$ # literal $
(?<cmd> # collect command content for eval
\( # require opening paren
(?:[^()]|\g<cmd>)+ # allow any number of non-parens, or balanced
# parens (by nesting the <cmd> expression
# recursively)
\) # require closing paren
)
/x
def call(value, _env, overwrite: false)
# Process interpolated shell commands
value.gsub(INTERPOLATED_SHELL_COMMAND) do |*|
# Eliminate opening and closing parentheses
command = $LAST_MATCH_INFO[:cmd][1..-2]
if $LAST_MATCH_INFO[:backslash]
# Command is escaped, don't replace it.
$LAST_MATCH_INFO[0][1..]
else
# Execute the command and return the value
`#{command}`.chomp
end
end
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/vendor/gems/dotenv-3.1.0/lib/dotenv/substitutions/variable.rb | vendor/gems/dotenv-3.1.0/lib/dotenv/substitutions/variable.rb | require "English"
module Dotenv
module Substitutions
# Substitute variables in a value.
#
# HOST=example.com
# URL="https://$HOST"
#
module Variable
class << self
VARIABLE = /
(\\)? # is it escaped with a backslash?
(\$) # literal $
(?!\() # shouldnt be followed by paranthesis
\{? # allow brace wrapping
([A-Z0-9_]+)? # optional alpha nums
\}? # closing brace
/xi
def call(value, env, overwrite: false)
combined_env = overwrite ? ENV.to_h.merge(env) : env.merge(ENV)
value.gsub(VARIABLE) do |variable|
match = $LAST_MATCH_INFO
substitute(match, variable, combined_env)
end
end
private
def substitute(match, variable, env)
if match[1] == "\\"
variable[1..]
elsif match[3]
env.fetch(match[3], "")
else
variable
end
end
end
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/seeds.rb | db/seeds.rb | # This file should contain all the record creation needed to seed the database with its default values.
# The data can then be loaded with the rake db:seed (or created alongside the db with db:setup).
require_relative 'seeds/seeder'
Seeder.seed
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/seeds/seeder.rb | db/seeds/seeder.rb | class Seeder
def self.seed
if User.any?
puts "At least one User already exists, not seeding."
exit
end
user = User.find_or_initialize_by(:email => ENV['SEED_EMAIL'].presence || "admin@example.com")
user.username = ENV['SEED_USERNAME'].presence || "admin"
user.password = ENV['SEED_PASSWORD'].presence || "password"
user.password_confirmation = ENV['SEED_PASSWORD'].presence || "password"
user.invitation_code = User::INVITATION_CODES.first
user.admin = true
user.save!
if DefaultScenarioImporter.seed(user)
puts "NOTE: The example 'SF Weather Agent' will not work until you edit it and put in a free API key from http://www.wunderground.com/weather/api/"
puts "See the Huginn Wiki for more Agent examples! https://github.com/huginn/huginn/wiki"
else
raise('Unable to import the default scenario')
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20161007030910_reset_data_output_agents.rb | db/migrate/20161007030910_reset_data_output_agents.rb | class ResetDataOutputAgents < ActiveRecord::Migration[4.2]
def up
Agents::DataOutputAgent.find_each do |agent|
agent.memory = {}
agent.save(validate: false)
agent.latest_events(true)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20160307085545_warn_about_duplicate_usernames.rb | db/migrate/20160307085545_warn_about_duplicate_usernames.rb | class WarnAboutDuplicateUsernames < ActiveRecord::Migration[4.2]
def up
names = User.group('LOWER(username)').having('count(*) > 1').pluck('LOWER(username)')
if names.length > 0
puts "-----------------------------------------------------"
puts "--------------------- WARNiNG -----------------------"
puts "-------- Found users with duplicate usernames -------"
puts "-----------------------------------------------------"
puts "For the users to log in using their username they have to change it to a unique name"
names.each do |name|
puts
puts "'#{name}' is used multiple times:"
User.where(['LOWER(username) = ?', name]).each do |u|
puts "#{u.id}\t#{u.email}"
end
end
puts
puts
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20131222211558_add_keep_events_for_to_agents.rb | db/migrate/20131222211558_add_keep_events_for_to_agents.rb | class AddKeepEventsForToAgents < ActiveRecord::Migration[4.2]
def change
add_column :agents, :keep_events_for, :integer, :null => false, :default => 0
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140216201250_add_propagate_immediately_to_agent.rb | db/migrate/20140216201250_add_propagate_immediately_to_agent.rb | class AddPropagateImmediatelyToAgent < ActiveRecord::Migration[4.2]
def up
add_column :agents, :propagate_immediately, :boolean, :default => false, :null => false
end
def down
remove_column :agents, :propagate_immediately
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20121231170705_add_memory_to_agents.rb | db/migrate/20121231170705_add_memory_to_agents.rb | class AddMemoryToAgents < ActiveRecord::Migration[4.2]
def change
add_column :agents, :memory, :text
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20170731191002_migrate_growl_agent_to_liquid.rb | db/migrate/20170731191002_migrate_growl_agent_to_liquid.rb | class MigrateGrowlAgentToLiquid < ActiveRecord::Migration[5.1]
def change
# Agents::GrowlAgent is no more
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140906030139_set_events_count_default.rb | db/migrate/20140906030139_set_events_count_default.rb | class SetEventsCountDefault < ActiveRecord::Migration[4.2]
def up
change_column_default(:agents, :events_count, 0)
change_column_null(:agents, :events_count, false, 0)
end
def down
change_column_null(:agents, :events_count, true)
change_column_default(:agents, :events_count, nil)
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20160307084729_add_deactivated_to_agents.rb | db/migrate/20160307084729_add_deactivated_to_agents.rb | class AddDeactivatedToAgents < ActiveRecord::Migration[4.2]
def change
add_column :agents, :deactivated, :boolean, default: false
add_index :agents, [:disabled, :deactivated]
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140127164931_change_handler_to_medium_text.rb | db/migrate/20140127164931_change_handler_to_medium_text.rb | # Increase handler size to 16MB (consistent with events.payload)
class ChangeHandlerToMediumText < ActiveRecord::Migration[4.2]
def up
if mysql?
change_column :delayed_jobs, :handler, :text, :limit => 16777215
end
end
def down
if mysql?
change_column :delayed_jobs, :handler, :text, :limit => 65535
end
end
def mysql?
ActiveRecord::Base.connection.adapter_name =~ /mysql/i
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20121223203701_create_delayed_jobs.rb | db/migrate/20121223203701_create_delayed_jobs.rb | class CreateDelayedJobs < ActiveRecord::Migration[4.2]
def self.up
create_table :delayed_jobs, :force => true do |table|
table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
table.integer :attempts, :default => 0 # Provides for retries, but still fail eventually.
table.text :handler # YAML-encoded string of the object that will do work
table.text :last_error # reason for last failure (See Note below)
table.datetime :run_at # When to run. Could be Time.zone.now for immediately, or sometime in the future.
table.datetime :locked_at # Set when a client is working on this object
table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
table.string :locked_by # Who is working on this object (if locked)
table.string :queue # The name of the queue this job is in
table.timestamps
end
add_index :delayed_jobs, [:priority, :run_at], :name => 'delayed_jobs_priority'
end
def self.down
drop_table :delayed_jobs
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20150808115436_remove_requirement_from_users_invitation_code.rb | db/migrate/20150808115436_remove_requirement_from_users_invitation_code.rb | class RemoveRequirementFromUsersInvitationCode < ActiveRecord::Migration[4.2]
def change
change_column_null :users, :invitation_code, true, ENV['INVITATION_CODE'].presence || 'try-huginn'
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20161004120214_update_pushover_agent_options.rb | db/migrate/20161004120214_update_pushover_agent_options.rb | class UpdatePushoverAgentOptions < ActiveRecord::Migration[4.2]
DEFAULT_OPTIONS = {
'message' => '{{ message | default: text }}',
'device' => '{{ device }}',
'title' => '{{ title | default: subject }}',
'url' => '{{ url }}',
'url_title' => '{{ url_title }}',
'priority' => '{{ priority }}',
'timestamp' => '{{ timestamp }}',
'sound' => '{{ sound }}',
'retry' => '{{ retry }}',
'expire' => '{{ expire }}',
}
def up
Agents::PushoverAgent.find_each do |agent|
options = agent.options
DEFAULT_OPTIONS.each_pair do |key, default|
current = options[key]
options[key] =
if current.blank?
default
else
"#{prefix_for(key)}#{current}#{suffix_for(key)}"
end
end
agent.save!(validate: false)
end
end
def down
Agents::PushoverAgent.transaction do
Agents::PushoverAgent.find_each do |agent|
options = agent.options
DEFAULT_OPTIONS.each_pair do |key, default|
current = options[key]
options[key] =
if current == default
''
else
current[/\A#{Regexp.quote(prefix_for(key))}(.*)#{Regexp.quote(suffix_for(key))}\z/, 1]
end or raise ActiveRecord::IrreversibleMigration, "Cannot revert migration once Pushover agents are configured"
end
agent.save!(validate: false)
end
end
end
def prefix_for(key)
"{% capture _default_ %}"
end
def suffix_for(key)
"{% endcapture %}" << DEFAULT_OPTIONS[key].sub(/(?=\}\}\z)/, '| default: _default_ ')
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140605032822_add_guid_to_agents.rb | db/migrate/20140605032822_add_guid_to_agents.rb | class AddGuidToAgents < ActiveRecord::Migration[4.2]
class Agent < ActiveRecord::Base; end
def change
add_column :agents, :guid, :string
Agent.find_each do |agent|
agent.update_attribute :guid, SecureRandom.hex
end
change_column_null :agents, :guid, false
add_index :agents, :guid
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20120919061122_enable_lockable_strategy_for_devise.rb | db/migrate/20120919061122_enable_lockable_strategy_for_devise.rb | class EnableLockableStrategyForDevise < ActiveRecord::Migration[4.2]
def up
add_column :users, :failed_attempts, :integer, :default => 0
add_column :users, :unlock_token, :string
add_column :users, :locked_at, :datetime
add_index :users, :unlock_token, :unique => true
end
def down
remove_column :users, :failed_attempts
remove_column :users, :unlock_token
remove_column :users, :locked_at
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140813110107_set_charset_for_mysql.rb | db/migrate/20140813110107_set_charset_for_mysql.rb | class SetCharsetForMysql < ActiveRecord::Migration[4.2]
def all_models
@all_models ||= [
Agent,
AgentLog,
Event,
Link,
Scenario,
ScenarioMembership,
User,
UserCredential,
Delayed::Job,
]
end
def change
# This is migration is for MySQL only.
return unless mysql?
reversible do |dir|
dir.up do
all_models.each { |model|
table_name = model.table_name
next unless connection.table_exists? table_name
model.columns.each { |column|
name = column.name
type = column.type
limit = column.limit
options = {
limit:,
null: column.null,
default: column.default,
}
case type
when :string, :text
options.update(charset: 'utf8', collation: 'utf8_unicode_ci')
case name
when 'username'
options.update(limit: 767 / 4, charset: 'utf8mb4', collation: 'utf8mb4_unicode_ci')
when 'message', 'options', 'name', 'memory',
'handler', 'last_error', 'payload', 'description'
options.update(charset: 'utf8mb4', collation: 'utf8mb4_bin')
when 'type', 'schedule', 'mode', 'email',
'invitation_code', 'reset_password_token'
options.update(collation: 'utf8_bin')
when 'guid', 'encrypted_password'
options.update(charset: 'ascii', collation: 'ascii_bin')
end
else
next
end
change_column table_name, name, type, **options
}
execute 'ALTER TABLE %s CHARACTER SET utf8 COLLATE utf8_unicode_ci' % table_name
}
execute 'ALTER DATABASE `%s` CHARACTER SET utf8 COLLATE utf8_unicode_ci' % connection.current_database
end
dir.down do
# Do nada; no use to go back
end
end
end
def mysql?
ActiveRecord::Base.connection.adapter_name =~ /mysql/i
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20161124061256_convert_website_agent_template_for_merge.rb | db/migrate/20161124061256_convert_website_agent_template_for_merge.rb | class ConvertWebsiteAgentTemplateForMerge < ActiveRecord::Migration[5.0]
def up
Agents::WebsiteAgent.find_each do |agent|
extract = agent.options['extract'].presence
template = agent.options['template'].presence
next unless extract.is_a?(Hash) && template.is_a?(Hash)
(extract.keys - template.keys).each do |key|
extract[key]['hidden'] = true
end
template.delete_if { |key, value|
extract.key?(key) &&
value.match(/\A\{\{\s*#{Regexp.quote(key)}\s*\}\}\z/)
}
agent.save!(validate: false)
end
end
def down
Agents::WebsiteAgent.find_each do |agent|
extract = agent.options['extract'].presence
template = agent.options['template'].presence
next unless extract.is_a?(Hash) && template.is_a?(Hash)
(extract.keys - template.keys).each do |key|
unless extract[key].delete('hidden').in?([true, 'true'])
template[key] = "{{ #{key} }}"
end
end
agent.save!(validate: false)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140505201716_migrate_agents_to_liquid_templating.rb | db/migrate/20140505201716_migrate_agents_to_liquid_templating.rb | require 'liquid_migrator'
class MigrateAgentsToLiquidTemplating < ActiveRecord::Migration[4.2]
def up
Agent.where(:type => 'Agents::HipchatAgent').each do |agent|
LiquidMigrator.convert_all_agent_options(agent)
end
Agent.where(:type => 'Agents::EventFormattingAgent').each do |agent|
agent.options['instructions'] = LiquidMigrator.convert_hash(agent.options['instructions'], {:merge_path_attributes => true, :leading_dollarsign_is_jsonpath => true})
agent.save
end
Agent.where(:type => 'Agents::PushbulletAgent').each do |agent|
LiquidMigrator.convert_all_agent_options(agent)
end
Agent.where(:type => 'Agents::JabberAgent').each do |agent|
LiquidMigrator.convert_all_agent_options(agent)
end
Agent.where(:type => 'Agents::DataOutputAgent').each do |agent|
LiquidMigrator.convert_all_agent_options(agent)
end
Agent.where(:type => 'Agents::TranslationAgent').each do |agent|
agent.options['content'] = LiquidMigrator.convert_hash(agent.options['content'], {:merge_path_attributes => true, :leading_dollarsign_is_jsonpath => true})
agent.save
end
Agent.where(:type => 'Agents::TwitterPublishAgent').each do |agent|
if (message = agent.options.delete('message_path')).present?
agent.options['message'] = "{{#{message}}}"
agent.save
end
end
Agent.where(:type => 'Agents::TriggerAgent').each do |agent|
agent.options['message'] = LiquidMigrator.convert_make_message(agent.options['message'])
agent.save
end
Agent.where(:type => 'Agents::PeakDetectorAgent').each do |agent|
agent.options['message'] = LiquidMigrator.convert_make_message(agent.options['message'])
agent.save
end
Agent.where(:type => 'Agents::HumanTaskAgent').each do |agent|
LiquidMigrator.convert_all_agent_options(agent)
end
end
def down
raise ActiveRecord::IrreversibleMigration, "Cannot revert migration to Liquid templating"
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20130107050049_add_invitation_code_to_users.rb | db/migrate/20130107050049_add_invitation_code_to_users.rb | class AddInvitationCodeToUsers < ActiveRecord::Migration[4.2]
def change
add_column :users, :invitation_code, :string
change_column :users, :invitation_code, :string, :null => false
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140602014917_add_indices_to_scenarios.rb | db/migrate/20140602014917_add_indices_to_scenarios.rb | class AddIndicesToScenarios < ActiveRecord::Migration[4.2]
def change
add_index :scenarios, [:user_id, :guid], :unique => true
add_index :scenario_memberships, :agent_id
add_index :scenario_memberships, :scenario_id
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140210062747_add_mode_to_user_credentials.rb | db/migrate/20140210062747_add_mode_to_user_credentials.rb | class AddModeToUserCredentials < ActiveRecord::Migration[4.2]
def change
add_column :user_credentials, :mode, :string, :default => 'text', :null => false
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20160823151303_set_emit_error_event_for_twitter_action_agents.rb | db/migrate/20160823151303_set_emit_error_event_for_twitter_action_agents.rb | class SetEmitErrorEventForTwitterActionAgents < ActiveRecord::Migration[4.2]
def up
Agents::TwitterActionAgent.find_each do |agent|
agent.options['emit_error_events'] = 'true'
agent.save!(validate: false)
end
end
def down
Agents::TwitterActionAgent.find_each do |agent|
agent.options.delete('emit_error_events')
agent.save!(validate: false)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20131105063248_add_expires_at_to_events.rb | db/migrate/20131105063248_add_expires_at_to_events.rb | class AddExpiresAtToEvents < ActiveRecord::Migration[4.2]
def change
add_column :events, :expires_at, :datetime
add_index :events, :expires_at
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20160423163416_add_xml_namespace_option_to_data_output_agents.rb | db/migrate/20160423163416_add_xml_namespace_option_to_data_output_agents.rb | class AddXmlNamespaceOptionToDataOutputAgents < ActiveRecord::Migration[4.2]
def up
Agents::DataOutputAgent.find_each do |agent|
agent.options['ns_media'] = 'true'
agent.options['ns_itunes'] = 'true'
agent.save!(validate: false)
end
end
def down
Agents::DataOutputAgent.find_each do |agent|
agent.options.delete 'ns_media'
agent.options.delete 'ns_itunes'
agent.save!(validate: false)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20130509053743_add_last_webhook_at_to_agents.rb | db/migrate/20130509053743_add_last_webhook_at_to_agents.rb | class AddLastWebhookAtToAgents < ActiveRecord::Migration[4.2]
def change
add_column :agents, :last_webhook_at, :datetime
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140723110551_adopt_xpath_in_website_agent.rb | db/migrate/20140723110551_adopt_xpath_in_website_agent.rb | class AdoptXpathInWebsiteAgent < ActiveRecord::Migration[4.2]
class Agent < ActiveRecord::Base
include JsonSerializedField
json_serialize :options
end
def up
Agent.where(type: 'Agents::WebsiteAgent').each do |agent|
extract = agent.options['extract']
next unless extract.is_a?(Hash) && extract.all? { |name, detail|
detail.key?('xpath') || detail.key?('css')
}
agent.options_will_change!
agent.options['extract'].each { |name, extraction|
case
when extraction.delete('text')
extraction['value'] = 'string(.)'
when attr = extraction.delete('attr')
extraction['value'] = "@#{attr}"
end
}
agent.save!
end
end
def down
raise ActiveRecord::IrreversibleMigration, "Cannot revert this migration"
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20121220053905_create_agents.rb | db/migrate/20121220053905_create_agents.rb | class CreateAgents < ActiveRecord::Migration[4.2]
def change
create_table :agents do |t|
t.integer :user_id
t.text :options
t.string :type
t.string :name
t.string :schedule
t.integer :events_count
t.datetime :last_check_at
t.datetime :last_receive_at
t.integer :last_checked_event_id
t.timestamps
end
add_index :agents, [:user_id, :created_at]
add_index :agents, :type
add_index :agents, :schedule
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140901143732_add_control_links.rb | db/migrate/20140901143732_add_control_links.rb | class AddControlLinks < ActiveRecord::Migration[4.2]
def change
create_table :control_links do |t|
t.integer :controller_id, null: false
t.integer :control_target_id, null: false
t.timestamps
end
add_index :control_links, [:controller_id, :control_target_id], unique: true
add_index :control_links, :control_target_id
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20140408150825_rename_webhook_to_web_request.rb | db/migrate/20140408150825_rename_webhook_to_web_request.rb | class RenameWebhookToWebRequest < ActiveRecord::Migration[4.2]
def up
rename_column :agents, :last_webhook_at, :last_web_request_at
end
def down
rename_column :agents, :last_web_request_at, :last_webhook_at
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20160419150930_add_icon_to_scenarios.rb | db/migrate/20160419150930_add_icon_to_scenarios.rb | class AddIconToScenarios < ActiveRecord::Migration[4.2]
def change
add_column :scenarios, :icon, :string
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20120728210244_devise_create_users.rb | db/migrate/20120728210244_devise_create_users.rb | class DeviseCreateUsers < ActiveRecord::Migration[4.2]
def change
create_table(:users) do |t|
## Database authenticatable
t.string :email, :null => false, :default => ""
t.string :encrypted_password, :null => false, :default => ""
## Recoverable
t.string :reset_password_token
t.datetime :reset_password_sent_at
## Rememberable
t.datetime :remember_created_at
## Trackable
t.integer :sign_in_count, :default => 0
t.datetime :current_sign_in_at
t.datetime :last_sign_in_at
t.string :current_sign_in_ip
t.string :last_sign_in_ip
## Confirmable
# t.string :confirmation_token
# t.datetime :confirmed_at
# t.datetime :confirmation_sent_at
# t.string :unconfirmed_email # Only if using reconfirmable
## Lockable
# t.integer :failed_attempts, :default => 0 # Only if lock strategy is :failed_attempts
# t.string :unlock_token # Only if unlock strategy is :email or :both
# t.datetime :locked_at
## Token authenticatable
# t.string :authentication_token
t.timestamps
end
add_index :users, :email, :unique => true
add_index :users, :reset_password_token, :unique => true
# add_index :users, :confirmation_token, :unique => true
# add_index :users, :unlock_token, :unique => true
# add_index :users, :authentication_token, :unique => true
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20121216025930_create_events.rb | db/migrate/20121216025930_create_events.rb | class CreateEvents < ActiveRecord::Migration[4.2]
def change
create_table :events do |t|
t.integer :user_id
t.integer :agent_id
t.decimal :lat, :precision => 15, :scale => 10
t.decimal :lng, :precision => 15, :scale => 10
t.text :payload
t.timestamps
end
add_index :events, [:user_id, :created_at]
add_index :events, [:agent_id, :created_at]
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
huginn/huginn | https://github.com/huginn/huginn/blob/8edec55aab03d4e3f13b205db02d21dc36e34e4f/db/migrate/20160301113717_add_confirmable_attributes_to_users.rb | db/migrate/20160301113717_add_confirmable_attributes_to_users.rb | class AddConfirmableAttributesToUsers < ActiveRecord::Migration[4.2]
def change
change_table(:users) do |t|
## Confirmable
t.string :confirmation_token
t.datetime :confirmed_at
t.datetime :confirmation_sent_at
t.string :unconfirmed_email # Only if using reconfirmable
end
add_index :users, :confirmation_token, unique: true
if ENV['REQUIRE_CONFIRMED_EMAIL'] != 'true' && ActiveRecord::Base.connection.column_exists?(:users, :confirmed_at)
User.update_all(confirmed_at: Time.zone.now)
end
end
end
| ruby | MIT | 8edec55aab03d4e3f13b205db02d21dc36e34e4f | 2026-01-04T15:37:27.328445Z | false |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.