Eviction injunction

Clifford Ray Hackett

CRH2123@iCloud.com

671-787-2345

Guam Housing and Urban Renewal Authority (GHURA)

Subject: Request for Injunction to Halt Eviction

Dear Sir/Madam,

I am writing to request an injunction to halt eviction proceedings initiated by the Guam Housing and Urban Renewal Authority (GHURA).

Background:

I have been a tenant at the above address for years and recently received an eviction notice. I believe this eviction to be unlawful and unjust.

Grounds for Injunction:

1. Violation of Lease Agreement:

The eviction notice violates the terms of my lease agreement.

2. Lack of Proper Notice:

Guam law requires a minimum number of days' notice before an eviction. I received no notice at all, which is insufficient.

3. Retaliatory Eviction:

The eviction may be retaliatory, coming after my recent complaints. Retaliatory evictions are prohibited.

4. Potential Discrimination:

I am concerned that this eviction may be discriminatory based on [race, gender, disability, etc.], which would violate the Fair Housing Act and other applicable laws.

5. Hardship:

The eviction would impose severe hardship on me.

Request:

I request the immediate cessation of eviction proceedings and an injunction to prevent further actions until a full investigation and hearing are conducted.

Please confirm receipt of this letter and provide a prompt response. I am prepared to provide additional documentation to support this request.

Sincerely,

Clifford Ray Hackett

Improved CRAWLER 241112

import requests
import time
import random
from bs4 import BeautifulSoup

# Global variables
URL_list = []
URL_parent_Category = {}
categoryLevel = {}
history = {}
final_URLs = {}
parsed = 0
n_URLs = 1
max_URLs = 5000

# Base URLs
URL_base1 = "https://mathworld.wolfram.com/topics/"  # Directory pages
URL_base2 = "https://mathworld.wolfram.com/"         # Final pages

# Seed URL and category
seed_URL = "https://mathworld.wolfram.com/topics/ProbabilityandStatistics.html"
seed_category = "Probability and Statistics"
categoryLevel[seed_category] = 1

# Validate function to filter unwanted links
def validate(string):
    ignore_list = ['about/', 'classroom/', 'contact/', 'whatsnew/', 'letters/']
    return (len(string) <= 60
            and all(i not in string for i in ignore_list)
            and 'topics' not in string)

# Request with retries and a random user agent
def get_request(url, retries=3, timeout=5):
    headers = {'User-Agent': random.choice([
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/58.0 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 Safari/605.1.15'
    ])}
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout, headers=headers)
            return response
        except requests.RequestException:
            time.sleep(2 + random.uniform(0, 1.5))  # Randomized sleep before retry
    return None

# Update URL and category lists
def update_lists(new_URL, new_category, parent_category, file):
    URL_parent_Category[new_URL] = new_category
    categoryLevel[new_category] = categoryLevel[parent_category] + 1
    level = str(categoryLevel[new_category])
    file.write(f"{level}\t{new_category}\t{parent_category}\n")
    file.flush()

# Crawling phase
def crawl(seed_URL, seed_category, file1, file2):
    global parsed, n_URLs
    URL_list.append(seed_URL)
    URL_parent_Category[seed_URL] = seed_category
    while parsed < min(max_URLs, n_URLs):
        URL = URL_list[parsed]
        parent_category = URL_parent_Category[URL]
        level = categoryLevel[parent_category]
        time.sleep(2 + random.uniform(0, 1.5))  # Throttle requests
        parsed += 1
        if URL in history:
            file1.write(f"{URL}\tDuplicate\t{parent_category}\t{level}\n")
            continue
        resp = get_request(URL)
        history[URL] = resp.status_code if resp else "Error"
        if not resp or resp.status_code != 200:
            file1.write(f"{URL}\tError:{resp.status_code if resp else 'Timeout'}\t{parent_category}\t{level}\n")
            continue
        file1.write(f"{URL}\tParsed\t{parent_category}\t{level}\n")
        soup = BeautifulSoup(resp.text, 'html.parser')
        for link in soup.find_all('a', href=True):
            href = link['href']
            new_category = link.text.strip()
            if 'topics/' in href:
                # Intermediate directory page: queue it for crawling
                new_URL = URL_base1 + href.split("/topics/")[1]
                URL_list.append(new_URL)
                update_lists(new_URL, new_category, parent_category, file2)
                file1.write(f"{new_URL}\tQueued\t{new_category}\t{level+1}\n")
                n_URLs += 1
            elif validate(href):
                # Final page: record it for the extraction phase
                new_URL = URL_base2 + href.split("/")[1]
                final_URLs[new_URL] = (new_category, parent_category, level+1)
                update_lists(new_URL, new_category, parent_category, file2)
                file1.write(f"{new_URL}\tEndNode\t{new_category}\t{level+1}\n")
    print(f"Crawling completed. Parsed {parsed} URLs out of {n_URLs}.")

# Content extraction phase
def extract_content(begin, end):
    with open("list_final_URLs.txt", "r", encoding="utf-8") as file_input, \
         open(f"crawl_final_{begin}_{end}.txt", "w", encoding="utf-8") as file_output:
        for line in file_input:
            count, URL, category = line.split("\t")[:3]
            if begin <= int(count) <= end:
                resp = get_request(URL)
                if resp and resp.status_code == 200:
                    page = resp.text.replace('\n', ' ')
                    file_output.write(f"{URL}\t{category}\t~{page}\n")
                else:
                    print(f"Error fetching {URL}: {resp.status_code if resp else 'Timeout'}")
    print(f"Content extraction from {begin} to {end} completed.")

# Main execution
if __name__ == "__main__":
    with open("crawl_log.txt", "w", encoding="utf-8") as file1, \
         open("crawl_categories.txt", "w", encoding="utf-8") as file2:
        crawl(seed_URL, seed_category, file1, file2)
    # Write the final URLs found during the crawl so extract_content() can read them
    with open("list_final_URLs.txt", "w", encoding="utf-8") as f:
        for count, (URL, (category, parent, level)) in enumerate(final_URLs.items(), start=1):
            f.write(f"{count}\t{URL}\t{category}\n")
    extract_content(begin=1, end=500)
    print("All tasks completed successfully.")
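As a quick sanity check on what a crawl produced, the short sketch below rebuilds and prints the category tree from the level/category/parent records that update_lists() writes to crawl_categories.txt. This is only a reader's sketch, not part of the original script; it assumes the file name and seed category used above.

from collections import defaultdict

# Rebuild the category tree from the "level \t category \t parent"
# records written by update_lists() above.
children = defaultdict(list)
with open("crawl_categories.txt", "r", encoding="utf-8") as f:
    for line in f:
        level, category, parent = line.rstrip("\n").split("\t")
        children[parent].append(category)

def print_tree(category, indent=0, seen=None):
    # Indented depth-first print; 'seen' guards against repeated category names.
    seen = set() if seen is None else seen
    if category in seen:
        return
    seen.add(category)
    print("  " * indent + category)
    for child in children.get(category, []):
        print_tree(child, indent + 1, seen)

print_tree("Probability and Statistics")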

Mahalo

SIGNATURE:
Clifford "RAY" Hackett www.rayis.me RESUME: www.rayis.me/resume

I founded www.adapt.org in 1980; it now has over 50 million members.
$500 of material = the world's fastest hydrofoil sailboat. http://sunrun.biz

Combined crawler and extraction

import requests
import time
import random
from bs4 import BeautifulSoup

# Global variables

URL_list = []
URL_parent_Category = {}
categoryLevel = {}
history = {}
final_URLs = {}
parsed = 0
n_URLs = 1
max_URLs = 5000

# Base URLs

URL_base1 = "https://mathworld.wolfram.com/topics/" # for directory pages (root)
URL_base2 = "https://mathworld.wolfram.com/" # for final pages

# Seed URL and category

seed_URL = "https://mathworld.wolfram.com/topics/ProbabilityandStatistics.html"
seed_category = "Probability and Statistics"
categoryLevel[seed_category] = 1 # Start category level

# Proxy setup (optional; uncomment and modify if needed)
# proxies = {
#     'http': 'http://username:password@proxy_url:proxy_port',
#     'https': 'https://username:password@proxy_url:proxy_port',
# }

# Validate function to filter unwanted links
def validate(string):
    ignore_list = ['about/', 'classroom/', 'contact/', 'whatsnew/', 'letters/']
    return (len(string) <= 60
            and all(i not in string for i in ignore_list)  # substring check, not exact match
            and 'topics' not in string)

# Request with retries and custom headers
def get_request_with_retries(url, retries=3, timeout=5):
    headers = {
        'User-Agent': random.choice([
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15'
        ])
    }
    for i in range(retries):
        try:
            # Uncomment the `proxies` argument if using a proxy:
            # resp = requests.get(url, timeout=timeout, proxies=proxies, headers=headers)
            resp = requests.get(url, timeout=timeout, headers=headers)
            return resp
        except requests.exceptions.RequestException as e:
            print(f"Attempt {i+1} failed for URL: {url}. Error: {e}")
            time.sleep(2 + random.uniform(0, 1.5))  # Randomized sleep before retry
    return None

# Update lists of URLs and categories
def update_lists(new_URL, new_category, parent_category, file):
    URL_parent_Category[new_URL] = new_category
    categoryLevel[new_category] = 1 + categoryLevel[parent_category]
    level = str(categoryLevel[new_category])
    file.write(f"{level}\t{new_category}\t{parent_category}\n")
    file.flush()

# Crawling phase (Step 1)
def crawl(seed_URL, seed_category, file1, file2):
    global parsed, n_URLs
    URL_list.append(seed_URL)
    URL_parent_Category[seed_URL] = seed_category
    categoryLevel[seed_category] = 1

    while parsed < min(max_URLs, n_URLs):
        URL = URL_list[parsed]
        parent_category = URL_parent_Category[URL]
        level = categoryLevel[parent_category]
        time.sleep(2 + random.uniform(0, 1.5))  # Slow down crawling
        parsed += 1
        if URL in history:
            print(f"Duplicate: {URL}")
            file1.write(f"{URL}\tDuplicate\t{parent_category}\t{level}\n")
        else:
            print(f"Parsing: {parsed}/{n_URLs}: {URL}")
            resp = get_request_with_retries(URL)
            if resp:
                history[URL] = resp.status_code
            else:
                history[URL] = "Error"
            if not resp or resp.status_code != 200:
                reason = resp.reason if resp else "Timeout"
                print(f"Failed: {URL} - {reason}")
                file1.write(f"{URL}\tError:{resp.status_code if resp else 'Timeout'}\t{reason}\t{parent_category}\t{level}\n")
            else:
                file1.write(f"{URL}\tParsed\t{parent_category}\t{level}\n")
                page = resp.text.replace('\n', ' ')
                soup = BeautifulSoup(page, 'html.parser')
                # Scrape intermediate directories (Type-1)
                for link in soup.find_all('a', href=True):
                    href = link['href']
                    if 'topics/' in href:
                        new_URL = URL_base1 + href.split("/topics/")[1]
                        new_category = link.text.strip()
                        URL_list.append(new_URL)
                        update_lists(new_URL, new_category, parent_category, file2)
                        file1.write(f"{new_URL}\tQueued\t{new_category}\t{level+1}\n")
                        n_URLs += 1
                # Scrape final pages (Type-2)
                for link in soup.find_all('a', href=True):
                    href = link['href']
                    if validate(href):
                        new_URL = URL_base2 + href.split("/")[1]
                        new_category = link.text.strip()
                        final_URLs[new_URL] = (new_category, parent_category, level+1)
                        update_lists(new_URL, new_category, parent_category, file2)
                        file1.write(f"{new_URL}\tEndNode\t{new_category}\t{level+1}\n")
    print(f"Crawling completed. Parsed {parsed} URLs out of {n_URLs}.")

# Content extraction phase (Step 2)
def extract_content(begin, end):
    with open("list_final_URLs.txt", "r", encoding="utf-8") as file_input:
        Lines = file_input.readlines()

    with open(f"crawl_final_{begin}_{end}.txt", "w", encoding="utf-8") as file_output:
        for line in Lines:
            count, URL, category = line.split("\t")[:3]
            if begin <= int(count) <= end:
                print(f"Page {count}: {URL}")
                resp = get_request_with_retries(URL)
                if resp and resp.status_code == 200:
                    page = resp.text.replace('\n', ' ')
                    file_output.write(f"{URL}\t{category}\t~{page}\n")
                else:
                    print(f"Error fetching {URL}: {resp.status_code if resp else 'Timeout'}")
    print(f"Content extraction from {begin} to {end} completed.")

# Main execution
if __name__ == "__main__":
    # Open files for logging
    with open("crawl_log.txt", "w", encoding="utf-8") as file1, \
         open("crawl_categories.txt", "w", encoding="utf-8") as file2:
        crawl(seed_URL="https://mathworld.wolfram.com/topics/ProbabilityandStatistics.html",
              seed_category="Probability and Statistics",
              file1=file1, file2=file2)

    # Write the final URLs found during the crawl so extract_content() can read them
    with open("list_final_URLs.txt", "w", encoding="utf-8") as f:
        for count, (URL, (category, parent, level)) in enumerate(final_URLs.items(), start=1):
            f.write(f"{count}\t{URL}\t{category}\n")

    # Extract content from final URLs (modify begin and end as needed)
    extract_content(begin=1, end=500)

    # Completion message
    print("All tasks completed successfully.")
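Because extract_content() takes begin and end indices, a long list of final URLs can be fetched in small, restartable batches rather than one long run; if a batch fails partway, only that range needs to be redone. Below is a minimal sketch of the pattern, assuming it runs in the same session as the script above; BATCH and total_pages are illustrative values, not from the original.

# Fetch final pages in restartable batches.
BATCH = 500
total_pages = 5000  # set this to the number of lines in list_final_URLs.txt

for start in range(1, total_pages + 1, BATCH):
    stop = min(start + BATCH - 1, total_pages)
    extract_content(begin=start, end=stop)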



Seasteading fixes the world's 8 worst problems

  1. Poverty (enrich): fishing and algae farms
  2. Air (clean): no pesticides
  3. Water (clean): fewer chemicals
  4. Nature (balance): less pollution
  5. Sickness (heal): less pollution
  6. Hunger (feed): seafood farms
  7. Operation (power): sustainable energy
  8. Peace (end war): no land disputes


Philippines marriage laws

Philippine laws relating to marital status follow Filipinos wherever they may go. Thus, as a rule, a married Filipino remains married even if a divorce is obtained abroad, because divorce is generally not recognized in the Philippines. Fortunately, there is a limited exception for the recognition of a foreign divorce decree, which would allow a divorced Filipino to remarry.

Under the second paragraph of Article 26, Family Code of the Philippines, if a validly celebrated marriage between a Filipino and a foreigner is dissolved by a foreign divorce decree capacitating the foreign spouse to remarry, the Filipino spouse can also remarry. In other words, for the divorce to be recognized in the Philippines, the following conditions must exist: (1) the marriage was between a Filipino and a foreigner; (2) the marriage was dissolved by a foreign divorce decree; and (3) the divorce was obtained by the non-Filipino spouse.


Letter to EEOC about United Airlines

EEOC HQ: 131 M Street NE, Washington, DC 20507, 202-663-4900

Clifford "RAY" Hackett 440 Kapiolani, Hilo,HI,96720

3659745, 671-787-2345

Events: Blocked from United Airlines employment.

I registered in person along with others, who were hired. Comments made by United personnel indicate I would not be hired for the following reasons:

Why: race (told haoles are not welcome); color (white; told I look like an evil ghost); religion (told Christianity is evil); sex (told men are bad); age (told I am too old); disability (told blind/deaf people are problems); genetic information (told my ancestor bombed their country).

Injury suffered: financial losses and physical injuries caused by this situation. A United Airlines employee shoved me as I stepped on a slippery spot, and I fell. I was the only job fair attendee not hired with the $25,000 bonus.



Solar NowNow