Cache-control directive (only-if-cached) changed by dev-tools?

We are working on a Progressive Web App whose service worker intercepts the network traffic (via the fetch event handler). We have noticed that sometimes a certain request fails here because Request.cache is only-if-cached while Request.mode is no-cors rather than same-origin.

So it appears similar to a previously reported problem.

I have also noticed that this happens only when the Chrome (v65) DevTools are not open. Has anybody observed the same phenomenon, and does anybody have an idea why it happens this way?

Parts of the request:

bodyUsed: false,
cache: "only-if-cached",
credentials: "include",
destination: "unknown",
headers: Headers {},
integrity: "",
method: "GET",
mode: "no-cors",
redirect: "follow",
referrer: "",
referrerPolicy: "no-referrer-when-downgrade",
url: "https://example.com/path/to/app-name/#!

We are currently handling the problem as follows, but I'm afraid this is not an appropriate solution.

serviceWorkerGlobal.addEventListener('fetch', function(event)
{
    // Returning without calling event.respondWith() hands the request
    // back to the browser's default handling.
    if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') {
        var oStrangeRequest = event.request.clone();
        console.log('Fetch. Request cache is only-if-cached, but mode is not same-origin.',
            oStrangeRequest.cache, oStrangeRequest.mode,
            'request redirect:',
            oStrangeRequest.redirect, oStrangeRequest.url, oStrangeRequest);
        return;
    }
    // ...
});
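For reference, the guard can be extracted into a plain predicate so it can be unit-tested outside the service worker; this is just a sketch of the same check used above:

```javascript
// Sketch: the same guard as in the handler above, as a testable predicate.
// It flags requests whose cache mode is only-if-cached while their request
// mode is anything other than same-origin (e.g. no-cors).
function isOnlyIfCachedWithoutSameOrigin(request) {
    return request.cache === 'only-if-cached' && request.mode !== 'same-origin';
}
```

Returning early from the fetch handler for such requests, without calling event.respondWith(), simply hands them back to the browser.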

JavaScript – How to silently retrieve the source of a webpage that won’t respond to a fetch() request

I have a Firefox/Chrome web extension. When the user clicks the extension’s button, the extension should retrieve the text of a dynamically generated URL. Because I am writing a web extension, I have no control over this URL.

Here’s an example of what should (but doesn’t) happen:

1) User clicks my extension’s button

2) The extension generates the following URL – this URL will change each time the extension’s button is clicked:

Example URL:

https://smmry.com/sm_portal.php?&SM_TOKEN=2635119454&SM_POST_SAVE=0&SM_REDUCTION=-1&SM_CHARACTER=-1&SM_LENGTH=7&SM_URL=http://money.cnn.com/2018/04/03/investing/amazon-stock-widely-held/index.html

3) The extension retrieves the text of the file at that URL and does something with it.
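For step 2, here is a sketch of building such a URL with URLSearchParams. The parameter names are taken from the example URL above; the token value is a made-up placeholder, and URLSearchParams percent-encodes SM_URL, unlike the literal example:

```javascript
// Sketch: build the portal URL from its query parameters.
// `token` is a placeholder; the real value changes on each click.
function buildPortalUrl(token, articleUrl) {
    const params = new URLSearchParams({
        SM_TOKEN: token,
        SM_POST_SAVE: '0',
        SM_REDUCTION: '-1',
        SM_CHARACTER: '-1',
        SM_LENGTH: '7',
        SM_URL: articleUrl   // percent-encoded by URLSearchParams
    });
    return 'https://smmry.com/sm_portal.php?' + params.toString();
}
```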

Here’s a simplified version of my code:


console.log("Fetching tokenSite");
fetch(generatedURL).then((response) => {
    console.log("Token site fetched");
    console.log(response);
});

However, what actually happens is that the fetch request fails.

Yet I can successfully open the URL manually, or use browser.tabs.create({ url: generatedURL }); to open it in a new tab.

I suspect the server is preventing the fetch() request from working because it is from an extension.

What are ways I can retrieve the text of the file located at that URL?

Someone has suggested loading the URL inside an iframe, but I don’t know how to do that (especially in the context of a web extension), so an example of that would be helpful.

Side note: once, the console did not even log “Token site fetched”; there was simply never a response to the request at all.
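Since the server is suspected of blocking the request, a first step is to surface why it fails rather than only logging the raw response. A sketch, with the fetch implementation injectable purely so the wrapper can be exercised without network access:

```javascript
// Sketch: surface the failure instead of silently logging the response.
// `fetchImpl` is injectable only for testing; in the extension it defaults
// to the global fetch.
function fetchTextOrExplain(url, fetchImpl = fetch) {
    return fetchImpl(url).then(response => {
        if (!response.ok) {
            // response.type can also reveal an opaque (CORS-blocked) response
            throw new Error('HTTP ' + response.status + ' (' + response.type + ') for ' + url);
        }
        return response.text();
    });
}

// fetchTextOrExplain(generatedURL).then(console.log).catch(console.error);
```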

Here is my manifest.json

{
    "manifest_version": 2,
    "name": "Summarizer",
    "version": "1.0",

    "description": "Summarizes webpages",

    "permissions": [
        "tabs",
        "downloads",
        "*://*.smmry.com/*"
    ],

    "icons": {
        "48": "icons/border-48.png"
    },

    "browser_action": {
        "browser_style": true,
        "default_popup": "popup/choose_page.html",
        "default_icon": {
            "16": "icons/summarizer-icon-16.png",
            "32": "icons/summarizer-icon-32.png"
        }
    }
}

How to delay fetch() until website has finished loading dynamic content

I’m using the following JavaScript code to download the source of a webpage as an HTML file. This code currently runs whenever the user clicks my extension’s button:

let URL = 'https://smmry.com/https://www.cnn.com/2018/04/01/politics/ronald-kessler-jake-tapper-interview/index.html#&SM_LENGTH=7'
fetch(URL)
    .then((resp) => resp.text())
    .then(responseText => {
        download("website_source.html", responseText)
    })

function download(filename, text) {

    var element = document.createElement('a');
    element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
    element.setAttribute('download', filename);

    element.style.display = 'none';
    document.body.appendChild(element);

    element.click();

    document.body.removeChild(element);
}

Here’s the source of the webpage: https://smmry.com/https://www.cnn.com/2018/04/01/politics/ronald-kessler-jake-tapper-interview/index.html#&SM_LENGTH=7

However, as you can see if you visit the webpage, sometimes the webpage takes a small amount of time (up to a few seconds) to summarize the article. It’s less noticeable on this article – but usually a pink loading bar will move up and down in the pink box until the summary is created and displayed on the website.

I believe my code is downloading the source of the website before it finishes summarizing the article, thus the HTML file my program downloads does not contain the summary of the article.

How can I make sure the fetch() request only downloads the content of the website once https://smmry.com has finished summarizing the article https://www.cnn.com/2018/04/01/politics/ronald-kessler-jake-tapper-interview/index.html?
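If the summary is rendered by JavaScript inside the page, fetch() alone can never see it, because fetch only returns the raw HTML the server sends. If, however, the server eventually serves the finished summary at the same URL, one possible sketch is a small retry loop. The timings and the readiness predicate here are made up; `fetchImpl` is injectable only so the loop can be tested without the network:

```javascript
// Sketch: re-fetch until the response text satisfies a predicate
// (e.g. text => text.includes('some marker of the finished summary')),
// or give up after `tries` attempts.
async function fetchWhenReady(url, isReady, { tries = 5, delayMs = 1000, fetchImpl = fetch } = {}) {
    for (let attempt = 0; attempt < tries; attempt++) {
        const text = await (await fetchImpl(url)).text();
        if (isReady(text)) {
            return text;
        }
        await new Promise(resolve => setTimeout(resolve, delayMs));
    }
    throw new Error('content did not appear after ' + tries + ' attempts');
}
```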

How to use fetch to get the HTML source of a webpage in a firefox or chrome web extension?

I am trying to use fetch to get the HTML source of https://smmry.com/ in a Chrome/Firefox web extension.

Here is my manifest.json

{
    "manifest_version": 2,
    "name": "To_Be_Done",
    "version": "1.0",

    "description": "To_Be_Done",

    "permissions": [
        "tabs",
        "*://*.smmry.com/*"
    ],

    "icons": {
        "48": "icons/border-48.png"
    },

    "browser_action": {
        "browser_style": true,
        "default_popup": "popup/choose_page.html",
        "default_icon": {
            "16": "icons/news-icon-16.png",
            "32": "icons/news-icon-32.png"
        }
    }
}

Whenever my extension’s button is clicked, I want to get the HTML source of smmry.com. However, I don’t know how to use the fetch() method to do this. I’ve read through the documentation, but I am still confused.

Can anyone show an example?
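A minimal sketch, assuming this runs from the popup script and that the *://*.smmry.com/* host permission in the manifest above covers the request. The fetch implementation is injectable only so the helper can be exercised without network access:

```javascript
// Sketch: fetch a page and read its body as text.
function getPageSource(url, fetchImpl = fetch) {
    return fetchImpl(url).then(response => response.text());
}

// In the popup script:
// getPageSource('https://smmry.com/').then(html => console.log(html));
```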

How can I fetch data from emails in Gmail using a Chrome extension?

I want to make a Chrome extension which can simply fetch data from mails in Gmail (data like the address the mail was received from, the content of the mail, etc.). How can I do this?

webRequest API: How to get the requestId of a new request?

The chrome.webRequest API has the concept of a request ID (source: Chrome webRequest documentation):

Request IDs

Each request is identified by a request ID. This ID is unique within a browser session and the context of an extension. It remains constant during the life cycle of a request and can be used to match events for the same request. Note that several HTTP requests are mapped to one web request in case of HTTP redirection or HTTP authentication.

You can use it to correlate the requests even across redirects. But how do you initially get hold of the ID when you start a new request with fetch or XMLHttpRequest?

So far, I have not found anything better than to use the URL of the request as a way to make the initial link between the new request and the requestId. However, if there are overlapping requests to the same resource, this is not reliable.

Questions:

  • If you make a new request (either with fetch or XMLHttpRequest), how do you reliably get access to the requestId?
  • Does the fetch API or XMLHttpRequest API allow access to the requestId?

What I want to do is to use the functionality provided by the webRequest API to modify a single request, but I want to make sure that I do not accidentally modify other pending requests.
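As far as I know, neither fetch nor XMLHttpRequest exposes the requestId, so URL-based correlation in onBeforeRequest is the usual workaround. A sketch that at least matches overlapping requests to the same URL in start order (the queueing logic here is hypothetical, not part of the webRequest API):

```javascript
// Sketch: queue one resolver per expected URL, so overlapping requests to
// the same resource are matched first-started, first-matched instead of
// colliding on the URL.
const pendingByUrl = new Map();

// Call this right before starting the fetch; resolves with the requestId.
function expectRequestId(url) {
    return new Promise(resolve => {
        if (!pendingByUrl.has(url)) {
            pendingByUrl.set(url, []);
        }
        pendingByUrl.get(url).push(resolve);
    });
}

// Wire this up as the listener, e.g.:
// chrome.webRequest.onBeforeRequest.addListener(matchRequest, { urls: ['<all_urls>'] });
function matchRequest(details) {
    const queue = pendingByUrl.get(details.url);
    if (queue && queue.length > 0) {
        queue.shift()(details.requestId);
    }
}
```

This is still heuristic: if another context requests the same URL at the same moment, the match can be wrong, which is exactly the unreliability described above.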

Cookie disabled or private mode error in a Chrome or Firefox extension background request

I am creating an extension for Firefox or Chrome, similar to Everliker; I want it to like my posts on my behalf.

Everliker does what I want, but I want to write it myself because we can’t use the extension’s pro plan: we can’t purchase it (the US has boycotted us, Google included, and Everliker’s payment is based on Google payments).

So I want to create it myself, and it is working well except for one problem. When I want to like posts:

var r = {
    method: "POST",
    headers: {
        Accept: "application/json, text/javascript, */*; q=0.01",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "X-CSRFToken": '9CdUSqRg9E48Yzcrl1DJfsjYYI8fEeci',
        "X-Instagram-Ajax": "1",
        'Origin': 'https://www.instagram.com',
        "X-Requested-With": "XMLHttpRequest"
    },
    credentials: "include"
};
console.log(fetch(likeUrl, r).then(this._toJson))

I get this error:


Error

This page could not be loaded. If you have cookies disabled in your
browser, or you are browsing in Private Mode, please try enabling
cookies or turning off Private Mode, and then retrying your action.

I don’t know what the headers should be, or how to enable cookies via headers, which is what the error seems to refer to.

I will appreciate any suggestion or help. Thanks!
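One thing that stands out is the hard-coded X-CSRFToken: if that value is stale, the server can reject the request in exactly this way. A sketch for reading the current csrftoken cookie instead (the cookie name `csrftoken` is an assumption about Instagram; the cookie string is injectable only so the helper can be tested outside a page):

```javascript
// Sketch: read a cookie by name from document.cookie instead of
// hard-coding the CSRF token in the headers.
function getCookie(name, cookieString = document.cookie) {
    const match = cookieString.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[1]) : null;
}

// In the headers object above (hypothetical cookie name):
// "X-CSRFToken": getCookie('csrftoken'),
```

Note that reading document.cookie this way only works in a context that shares the site’s cookies (e.g. a content script), not in a background page.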

XMLHttpRequest – Don’t load resources?

I’m trying to scrape just the text of a website (to run a regex on it). However, I noticed that when the website contains a video that autoplays, the load event doesn’t fire until the resource has finished loading. Normally I would listen for load, but here I can’t, so I resorted to running the regex on each readystatechange, which isn’t very pretty.

const xhr = new XMLHttpRequest()

// Must be a regular function (not an arrow) so `this` refers to the XHR.
xhr.addEventListener('readystatechange', function () {
  if (this.readyState > 2) {                  // LOADING: partial responseText is available
    if (this.responseText.match(/pattern/)) { // placeholder for the actual regex
      this.abort()
    }
  }
})

xhr.open('GET', 'https://www.youtube.com')
xhr.send()

Note that the URL will be one of many random URLs, so I can’t use APIs or experimental server-side techniques like overriding the MIME type. I usually use fetch, but I don’t think it has what I want.
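For what it’s worth, fetch can read partial text too: response.body is a ReadableStream, so you can scan chunks as they arrive and cancel the download once the regex matches. A sketch (the regex and URL are placeholders):

```javascript
// Sketch: stream a response body, run the regex on the accumulated text,
// and cancel the rest of the download as soon as it matches.
async function scanResponse(response, regex) {
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let text = '';
    for (;;) {
        const { done, value } = await reader.read();
        if (done) {
            break;
        }
        text += decoder.decode(value, { stream: true });
        const match = text.match(regex);
        if (match) {
            await reader.cancel(); // stop downloading the remainder
            return match[0];
        }
    }
    return null; // stream ended without a match
}

// fetch('https://www.youtube.com').then(r => scanResponse(r, /pattern/)).then(console.log);
```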